The Pathology of Business & Data Modelling

Before you start building your next business or data model, find out what it was that infected or killed your last one!

Business and data models continue to be built in enterprises year after year. All of them come to life after a great deal of pain and effort. Most of them cause far more pain still while they are alive and then, mercifully, they die. A blessed relief for all concerned.

The enterprise then realises that the pain of living without them is greater than the pain of living with them, and once again begins the tortuous process of giving birth to what is doomed to be yet another huge, deformed monster.

Why does this happen?

Insanitary Practices Cause Infection

Because most enterprises are really unhealthy environments in which to conceive these models.

Essentially, they are insanitary. They are so badly infected with day-to-day bad practices that no model can stay healthy for any length of time. Some of these infections will prevent the models being born at all. Paradoxically, this is often a blessing for the enterprise. It would have been far worse had the infected model survived.

The greatest curse for any enterprise is a model that has been so badly infected by the unhealthy environment, lack of fundamental knowledge, lack of standards and appallingly bad practice that it has, from day one, started to mutate and grow into some grotesque, alien life-form. This starts to absorb huge amounts of energy even before it is born, and ever more so after it is!

Magic Medicine

Amazingly, a multimillion-dollar industry has grown up to support these monsters. Yes, to support them! The consultants from this industry will tell enterprises that they have done nothing wrong, that all models turn out like this. They tell them that process, data and IT are complex things. It is not possible to talk of these in terms of ‘zero defects’ or ‘getting it right first time’.

They acknowledge that what the enterprise has spawned is indeed grotesque, but to be expected.  However, fear not, they can help.  Can they cure it or reverse it?  Well, no. Nobody can. However, what they can do is even better.  They can support the enterprise in looking after and nurturing it.

Yes, it will grow bigger and yes, it will create more inefficiencies, costs and errors. Happily, the consultants have, for a price, unlimited amounts of really powerful medicine in the form of ‘magic’ software and a highly paid specialist team to help the enterprise administer it for as long as the life-form survives – which, with their help, will be a very long time.

Is There No Hope?

Is this really what you want for your enterprise? Is there no alternative? Yes, there is, and it is essentially simple.

Firstly, you have to completely sanitise your enterprise.  You have to get rid of all bad standards, attitudes and practices – and of the people who perpetuate them.

What are these bad standards, attitudes and practices?

I will describe all of these, and their antidotes, in the next two articles on The Pathology of Business & Data Models. The first will cover Function and Process models; the second will cover Data, including Data Quality and Master Data Management.


4 Responses to “The Pathology of Business & Data Modelling”

  1. Lawrence Gingold April 6, 2012 11:42 am

    I look forward to the further discussions on this subject. For context: I am not an IT specialist; I am a business transformation consultant who relies on business process disciplines and frameworks. More importantly, I see that while most business process leaders look to find the “best fit process”, which professes to reduce variation and thus be most efficient, business process (unlike manufacturing process) needs to be capable of actually managing variation and be dynamic in applying new rules and changes as they happen.

    I am currently developing strategic and administrative processes for a client. Both of these require attention to variations or multiple scenarios, and I am currently designing the data model for these processes. The key for me is to understand the data from the “outside in”: that is, understand the outcomes of the process in terms of data requirements, then define the sources (inputs) into the processes and the associated rules. This seems to allow me to define the activities and develop what the process needs to derive or apply in data to achieve the outcome. Once this is clarified I am clearer on the sources and the definition of the data model, and can communicate what I need to the IT database developer. It also seems to allow me to do something you stressed in the beginning: clear out the current data model’s inaccuracies, particularly in ‘source of truth’ versus ‘repository of truth’ definitions.

    Your discussion is exciting and I expect it will assist me in developing even better tools to communicate with IT on the data model requirements. Thanks for starting this.

  2. Todd Everett April 1, 2012 1:46 am

    John:

    Looking forward to the next two posts. One thing I am seeing now, as I attempt to improve my conceptual modeling skills, is a lack of discipline around defining entities and following the rules of data modeling. I see a lot of high-level models which use collections interchangeably with entities – for example, an entity called Workforce. Another thing I am seeing is sloppy sub-typing. It was only recently that I began to understand the need to sub-type using a single, fundamentally unchanging characteristic that classifies each super-type occurrence as one and only one sub-type. I was recently looking at a model where I work that had a party relationship entity with valid sub-types like customer, but then also had an entity called “financials” with sub-types of “billing” and “payment”! To me financials is a collection of related functions, and of course billing and payment are functions. Do you often see this kind of subtle mistake in conceptual models you encounter? Keep up the great posts – there are so few bloggers in the conceptual modeling space!
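
    To make that rule concrete, here is a minimal sketch in Python rather than an ER diagram – the Party/Person/Organisation names are illustrative, not taken from the model in question:

        from dataclasses import dataclass

        @dataclass
        class Party:
            """Super-type: any individual or organisation of interest."""
            party_id: int
            name: str

        @dataclass
        class Person(Party):
            """Sub-type discriminated by legal form, a single
            characteristic that never changes for a given Party."""
            date_of_birth: str

        @dataclass
        class Organisation(Party):
            """The other value of the same single discriminator."""
            registration_number: str

        # By contrast, an entity "Financials" with sub-types "Billing" and
        # "Payment" fails the rule: billing and payment are functions the
        # business performs, not mutually exclusive classes of one entity.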

    • John Owens April 5, 2012 5:54 am

      Hi Todd

      You are right that there is a serious lack of understanding of the fundamentals of logical data modelling. The examples that you give demonstrate this clearly.

      There are also subtle traps into which unwary data modellers often fall. The first of these is talking about a ‘conceptual data model’ – this does not exist. There are Logical Data Models (LDMs) and Physical Data Models – also called Database Schemas. The term ‘conceptual data model’ is an oxymoron: it is either conceptual or it is a model. If it is conceptual, then it is not a model, and is of little value to the data modeller or the enterprise.

      The second trap is having sub-types of ‘Customer’ and ‘Supplier’ for the entity called Party or Legal Entity. This is wrong for several reasons. The first, and major, reason is that they are not sub-types of Party. Whatever Role a Party plays in a transaction, its attributes and relationships will (nearly) always be the same. These supposed sub-types are in fact Roles played by Party – another entity entirely.

      If the data analyst avoids that trap, they might then fail to appreciate that Customer and Supplier are in fact derivable Roles and, being derivable, ought never to appear in the LDM. Customers are Parties to whom the enterprise has sold goods or services; Suppliers are Parties from whom the enterprise has purchased goods or services. By looking at all commercial transactions, the enterprise can derive which Parties played ‘Customer’ and which played ‘Supplier’.

      Other Roles might not be derivable – for example, ‘Guarantor’ or ‘Referee’ – and would need to be declared.
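
      As a rough sketch of the difference – illustrative Python only, not one of the published models – the derivable Roles are computed from the commercial transactions, while only declared Roles would be recorded in the LDM:

          from dataclasses import dataclass
          from enum import Enum

          class Direction(Enum):
              SALE = "sale"          # the enterprise sold to the Party
              PURCHASE = "purchase"  # the enterprise purchased from the Party

          @dataclass
          class CommercialTransaction:
              party_id: int
              direction: Direction
              amount: float

          def customers(txns):
              """Derivable Role: any Party to whom the enterprise has sold."""
              return {t.party_id for t in txns if t.direction is Direction.SALE}

          def suppliers(txns):
              """Derivable Role: any Party from whom the enterprise has purchased."""
              return {t.party_id for t in txns if t.direction is Direction.PURCHASE}

          # Declared Roles such as 'Guarantor' cannot be derived from
          # transactions, so only they would appear, e.g. as a PartyRole
          # entity, in the Logical Data Model.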

      I have published some models showing the different structures for derivable and declarative Roles at The Phantom Entities of Customer and Supplier.

      Regards
      John

  3. Richard Ordowich March 22, 2012 11:38 am

    “Doing the same thing over and over again and thinking the outcome is going to be different” is one definition of insanity. “They don’t know what they don’t know” is another cliché that reflects many an organization.

    When it comes to IT, and in particular designing databases, my experience has been that speed is valued more than quality and long-term benefits. “Get it done!” – and those who rise to this command reap the rewards. Designers are not rewarded for thinking but for action. Designing databases (for that matter, designing anything) requires thinking. Lots of it. Experimenting and trying varied approaches requires time. Are you done? We’ve got to get this system into production quickly so it will fail fast and we can spend the next 20 years patching it together.

    Lack of experience is perhaps another contributor to poor designs. The next team of young Turks will use the latest whiz-bang, agile, hula-hoop technique to design a database like no one has ever seen before. The big data will be in the cloud, unstructured and accessible to all. It’s a miracle. When do I get on the next project? This one is ready to go into maintenance mode.

    My recommendation is that organizations should consider database design as an experiment and refer to it that way: “We are beginning a database experiment.” Like drug research, a design should be subject to trials before it inflicts pain on its patients. The side effects should be explicitly labeled, for example “this database will slow things down” or “this database should not be used by anyone other than experienced and trained data experts”.

    Every design has limitations. Be explicit about them, and if someone comes up with a design and claims it has no limitations, find another snake oil salesman.

    Organizations wanting a new database design set low expectations – and always meet them!
