The most successful data migrations are those completed by an organisation that has done one before. Yet typically no two migrations are the same, even when the existing and target applications come from the same vendor!
Unfortunately, in Australia the selected vendor and their chosen systems integrator (SI) often downplay the detail and the difficulties that arise when migrating data from the old application to the new. They focus instead on the functional comparisons and benefits of the new cloud software and suggest the data migration is a foregone conclusion, when in fact it will become the most difficult part of the move from old to new software. Thinking about it from the following perspective should help you understand the complexities of the data transition.
When you were deciding on the new software you would have reviewed how well the proposed vendor's product satisfied your functional requirements, and you would also have looked for significant improvements in how it performed or extended your existing functions. These improvements, or differences in how the proposed software performs your business processes, are what cause the data migration challenges. In essence, both the old and the new software have an underlying data model; the two models will be different, most often very different, and it is these differences that create the significant data migration challenges.
As already stated, these differences have a significant impact on the data model configurations during a migration. They are the critical items that, if not discovered and resolved before migrating the existing data to the new application, will cause failures at testing, extend the time to finish and increase the spend on the project! By how much depends on the data and the business rule differences; historically, in most instances, they have been significant.
For an enterprise implementation this is because Financial and ERP applications carry a degree of functional and configuration change unique to every organisation. These business rules cause significant translation challenges when moving the data from old applications to new cloud-based financial applications.
Notwithstanding the data model differences, there are in every migration several items that are the same and should always be adopted. Some of these are: define your architecture first; agree the number of check points for data validation; discover all business rule logic that might affect the migration; define key mappings for the data; resolve all data transformations and translations from the old data model to the new; test data often and correct it quickly; and decide whether the final migration takes a big-bang or gradual approach.
One of the most important decisions to reach early in the migration project is the overarching architecture. Please see below for a Simple Migration Architecture Overview.
In the diagram below, the data movement is shown as a file; however, other mechanisms may be more appropriate for the volumes and nature of the data involved in the migration, for example a set of exposed database tables. As shown, each adapter is unique to its source application.
After the data has been extracted from the source application, it is moved into the Common Data Store (CDS). The transformation tool used to do this depends on the site requirements and budget: generally an ETL tool loads the CDS, although a combination of custom code and native database tools may achieve the same result. The transformation from the application data format into the common data structure is unique to each source format; the delivery of the data into the CDS is then common to all data types.
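The adapter-then-common-loader pattern can be sketched as follows. This is a minimal illustration only, not Novon's implementation: the schema, adapter and function names (`cds_customer`, `legacy_adapter`, `load_cds`) are hypothetical, and SQLite stands in for whatever database underpins the CDS.

```python
import sqlite3

def create_cds(conn):
    # One common table per object type; here, a minimal "customer" object.
    conn.execute("""CREATE TABLE IF NOT EXISTS cds_customer (
        source_system TEXT, source_key TEXT, name TEXT, status TEXT)""")

def legacy_adapter(raw_rows):
    # Source-specific transform: maps the legacy extract's column layout
    # onto the common structure. This part is unique to each source application.
    for row in raw_rows:
        yield {"source_system": "LEGACY_FIN",
               "source_key": row["CUST_NO"],
               "name": row["CUST_NAME"].strip().title(),
               "status": "ACTIVE" if row["STAT"] == "A" else "INACTIVE"}

def load_cds(conn, records):
    # Common delivery path: identical for every source once transformed.
    conn.executemany(
        "INSERT INTO cds_customer VALUES (:source_system, :source_key, :name, :status)",
        records)

conn = sqlite3.connect(":memory:")
create_cds(conn)
raw = [{"CUST_NO": "C001", "CUST_NAME": " jane smith ", "STAT": "A"}]
load_cds(conn, legacy_adapter(raw))
```

Adding a second source application means writing only another adapter; `load_cds` and the CDS schema are untouched, which is the point of the common structure.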
Common business and validation rules are applied to the data within the CDS. These rules are normally run against the specific object type, regardless of its source. Within the CDS, key values such as policy numbers, product IDs, customer IDs and chart of account numbers are cross-referenced into application-neutral values. Outbound processing then translates these keys into the values required by the target application wherever a relationship is defined. The cross-reference is dynamic, so as further relationships are defined they can be updated within the CDS.
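A cross-reference of this kind can be sketched as below. The class and method names are hypothetical and the in-memory dictionaries stand in for cross-reference tables in the CDS; the sketch shows only the core idea of assigning neutral values on first sight and translating them outbound once a relationship exists.

```python
class CrossRef:
    """Illustrative application-neutral key cross-reference."""

    def __init__(self):
        self._neutral = {}   # (source_system, source_key) -> neutral id
        self._targets = {}   # (neutral id, target_system) -> target key
        self._next_id = 1

    def neutral_id(self, source_system, source_key):
        # Assign a neutral value the first time a source key is seen;
        # return the same value on every subsequent lookup.
        k = (source_system, source_key)
        if k not in self._neutral:
            self._neutral[k] = f"N{self._next_id:06d}"
            self._next_id += 1
        return self._neutral[k]

    def map_target(self, neutral, target_system, target_key):
        # Record a relationship once the target value is known; because the
        # cross-reference is dynamic, this can happen at any point.
        self._targets[(neutral, target_system)] = target_key

    def to_target(self, neutral, target_system):
        # Outbound translation; None when no relationship is defined yet.
        return self._targets.get((neutral, target_system))
```

In use, every inbound adapter would call `neutral_id` while loading the CDS, and the outbound processing would call `to_target` when building the delivery for a given target application.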
The dynamic cross-reference technique used to manage key relationships pays off at the point of migration into the target system(s). Cross-reference data may be created by the target system itself (for example when a new data object instance is created and a key is automatically assigned), and the data returned during the import process can be used to feed those relationships back into the CDS. This means that should the data later need to be extracted to a peripheral system or another target, the target values can be included in the data stream as required.
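That feedback loop can be sketched as follows. The shape of the import response (`neutral_id`, `assigned_key` fields) and the plain-dictionary cross-reference are assumptions for illustration; a real target system's import acknowledgement format will differ.

```python
# Before the import runs, the target key for this neutral id is unknown.
xref = {("N000001", "CLOUD_ERP"): None}

def apply_import_response(xref, response_rows, target_system):
    # The target's import response pairs each neutral id with the key the
    # target generated, so the relationship can be fed back into the CDS.
    for row in response_rows:
        xref[(row["neutral_id"], target_system)] = row["assigned_key"]

response = [{"neutral_id": "N000001", "assigned_key": "ACC-88231"}]
apply_import_response(xref, response, "CLOUD_ERP")
# A later extract to a peripheral system or another target can now
# translate the neutral id into the target's own key.
```

The same mechanism works for any number of targets: each gets its own column (or row) in the cross-reference, keyed by the neutral value.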
Novon (including the Lonispace acquisition) has been migrating data from one application to another for more than 19 years and has in that time completed many successful data migrations: from on-prem back office financial applications to on-prem supply chain ERP solutions, and more recently from on-prem Financials/ERP applications to cloud-based ERP/Financial applications.