Our Approach
Quick turnaround with progressive prototyping
We understand our clients’ problems and develop appropriate solutions quickly – usually delivering a working prototype of the key computations within weeks. This allows us and our clients to validate our approach and, more importantly, the correctness of our clients’ specifications right from the start, while it is still relatively easy to adjust the system.
The essence of our progressive prototyping approach is to interview subject matter experts who have a thorough knowledge of the enterprise, select a business process, build a prototype, review the prototype with stakeholders, and then modify or extend the prototype as appropriate. The process is repeated until all functionality is integrated into the prototype, which then matures into the production system.
Flexible metadata-driven architecture
We understand that systems need to evolve as business processes change. Organizations are not static, and neither are their processes. The unfortunate truth for many organizations is that, by the time a system is implemented, it is already obsolete and no longer supports the business’s needs.
We use a flexible metadata-driven architecture that significantly reduces the need for software releases. Rather than implementing the business logic in the code of the system, we express it as metadata; when business rules change, the changes are made to the metadata rather than to the code. Business logic changes can therefore be made quickly, without a development cycle. This approach also facilitates the transfer of ownership of the system from developers to analysts: we train our clients’ personnel to own, maintain and update the systems we develop, so that the solutions endure after the project is over.
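As a minimal sketch of the idea (the table and column names here are hypothetical, not taken from any client system), a business rule such as a rate applied to a category of line items can live in a metadata table that generic computation code reads at run time:

    -- Hypothetical metadata table: each row is a business rule, not code.
    CREATE TABLE RuleMetadata (
        RuleId        int           PRIMARY KEY,
        ItemCategory  varchar(50)   NOT NULL,   -- which items the rule governs
        Multiplier    decimal(9,4)  NOT NULL,   -- the rate the rule applies
        EffectiveFrom date          NOT NULL    -- rules change over time
    );

    -- Changing a business rule is a metadata update, not a software release.
    UPDATE RuleMetadata SET Multiplier = 1.0450 WHERE RuleId = 42;

    -- Generic computation code reads the rules at run time.
    SELECT i.ItemId,
           i.Amount * r.Multiplier AS AdjustedAmount
    FROM   LineItem AS i
    JOIN   RuleMetadata AS r
      ON   r.ItemCategory = i.Category
     AND   r.EffectiveFrom <= i.AsOfDate;

Because the rule is data, an analyst can review, change, and audit it directly, with no recompilation or redeployment.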
Efficiency and transparency
We build complex data analysis systems that are both efficient and transparent. For example, the national gross domestic product (US GDP) system that we built for the Bureau of Economic Analysis (BEA) is more than 25 times faster than its predecessor. In addition, the modernized system features a suite of tools that allows analysts to track the data from its source through the computation process – in effect an audit trail – a capability not previously available. This audit trail can be used to explain a computation and is very useful in tracking down the sources of unexpected results. These tools are largely independent of the particulars of the system and can easily be customized to fit the specific needs of each client.
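The following is a simplified sketch of how such a trail can be represented and queried (the schema is hypothetical; the actual tools are tailored to each system). If every computed value records the inputs and the rule that produced it, a suspect result can be traced back to its sources with a recursive query:

    -- Hypothetical lineage table: one row per input that fed a computed value.
    CREATE TABLE AuditTrail (
        ResultId int NOT NULL,   -- the derived value
        SourceId int NOT NULL,   -- a value it was derived from
        RuleId   int NOT NULL    -- the metadata rule that was applied
    );

    -- Trace a suspect result back, step by step, to its source data.
    DECLARE @SuspectValue int = 1001;   -- the value under investigation

    WITH Lineage AS (
        SELECT ResultId, SourceId, RuleId, 1 AS Step
        FROM   AuditTrail
        WHERE  ResultId = @SuspectValue
        UNION ALL
        SELECT a.ResultId, a.SourceId, a.RuleId, l.Step + 1
        FROM   AuditTrail AS a
        JOIN   Lineage AS l ON a.ResultId = l.SourceId
    )
    SELECT * FROM Lineage ORDER BY Step;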
The ability to formulate alternative business logic scenarios through metadata, and to explain the results through the audit trail, lets analysts experiment with what-if scenarios without additional development.
Requirements
Good systems require good requirements and specifications: ambiguous, incomplete, inconsistent or otherwise poor requirements produce inferior systems, yet eliciting high-quality ones can be a serious challenge. We address this by establishing an integrated team with our clients’ key stakeholders and subject matter experts, and then applying an iterative progressive prototyping process: listen to the business story, build a prototype implementing its essentials, present the prototype to the client, gather and incorporate feedback (which often includes changes to the requirements and specifications as the client’s own understanding improves), and repeat. The final requirements and specifications are, in effect, developed concurrently with the system. Because most of the business logic in our systems is implemented as metadata, the requirements and specifications are also formal and unambiguous.
DBMS as a computational platform
Systems dealing with large datasets must keep pace with user demands without buckling under the load. Most systems developed by Omnicom are built on Microsoft SQL Server, with the business logic implemented directly in the database layer as stored procedures, with callouts to C#, Python, and R as needed. The primary benefit of embedding business logic in the database layer is performance: large quantities of data never have to be moved in and out of the database, and computing where the data lives yields both efficiency and flexibility.
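As a simplified illustration (the procedure and tables are hypothetical), a computation over millions of detail rows can run as a set-based stored procedure next to the data, instead of pulling the rows into an application tier:

    CREATE PROCEDURE ComputeCategoryTotals
        @AsOfDate date
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Set-based aggregation: one pass over the detail rows inside the
        -- database, with no row-at-a-time round trips to an application tier.
        INSERT INTO CategoryTotal (Category, AsOfDate, Total)
        SELECT Category, @AsOfDate, SUM(Amount)
        FROM   LineItem
        WHERE  AsOfDate = @AsOfDate
        GROUP  BY Category;
    END;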
We also recognize that the business logic of many systems can be effectively modeled using tree and graph data structures, and we have developed novel, efficient SQL implementations of a variety of breadth-first tree and graph algorithms, one of which is sketched below.
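As one sketch of the technique (the Edge table and the depth bound are assumptions for illustration), a breadth-first traversal of a directed graph can be expressed as a recursive common table expression, with each node’s breadth-first distance recovered as the first level at which it is reached:

    -- A directed graph stored as an edge list: Edge(FromNode, ToNode).
    DECLARE @Start int = 1;   -- traversal root

    WITH Frontier AS (
        SELECT @Start AS Node, 0 AS Lvl    -- level 0: the start node
        UNION ALL
        SELECT e.ToNode, f.Lvl + 1         -- expand the frontier by one edge
        FROM   Edge AS e
        JOIN   Frontier AS f ON e.FromNode = f.Node
        WHERE  f.Lvl < 32                  -- depth bound guards against cycles
    )
    SELECT Node, MIN(Lvl) AS Distance      -- the first level at which a node
    FROM   Frontier                        -- appears is its BFS distance
    GROUP  BY Node;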