LS Retail Data Director User Guide
August 2009
Copyright 2009 LS Retail ehf. All rights reserved. Published by LS Retail ehf.
Contents
1 Job Scheduling / Replication
  1.1 About Replication
  1.2 LS Retail Architecture
  1.3 Methods of Replication
    1.3.1 Adding Replication Counter to Tables
    1.3.2 Adding Actions Generation Code to Tables
  1.4 Summary
2 About Data Director
  2.1 Modes of Data Director
  2.2 Concepts in Data Director
  2.3 Summary
3 How Data Director Works
  3.1 Scheduler Database
  3.2 Log Database
  3.3 Package Flow
  3.4 Data Distribution
  3.5 Job Scheduling
  3.6 Summary
4 Setting up Data Director
  4.1 Prerequisites
    4.1.1 General
    4.1.2 Hardware
    4.1.3 Software
    4.1.4 Security Considerations
  4.2 Installing the Data Director
    4.2.1 Installing the Microsoft Dynamics Application Objects
    4.2.2 Upgrading Scheduler Objects in an Existing System
    4.2.3 Installing the Microsoft Dynamics NAV database
    4.2.4 Creating the NAV user accounts
    4.2.5 Installing the Data Director
    4.2.6 Configuring the Data Director Service
    4.2.7 Configuring the NAV Plugin
  4.3 Managing the Data Director Service
  4.4 Maintenance
  4.5 Upgrading Data Director
  4.6 Troubleshooting
  4.7 Summary
5 Setting up Replication
  5.1 Setting up Distribution Location
  5.2 Setting up Scheduler Job
  5.3 Setting up the Scheduler
    5.3.1 LS Scheduling
    5.3.2 NAS Scheduling
  5.4 Replicating Objects
  5.5 Replicating Files
  5.6 Replication Using Transaction Server
  5.7 Data Distribution
    5.7.1 About Data Distribution
    5.7.2 Setting up Data Distribution
7 Optimizing Replication
  7.1 Preload Actions
  7.2 Tables to Replicate
  7.3 Fields and Tables to Exclude From Replication
  7.4 Specifying General Replication Information
  7.5 LS Configuration Use Cases
1 Job Scheduling / Replication
1.1 About Replication
A computer database is a structured collection of data, in the form of records, stored in a computer system and organized by application software. It is common for an organization to maintain multiple transactional databases in different locations for reasons of reliability, accessibility and fault tolerance. These databases need to be consolidated into a central database for reporting purposes, and the central database in turn needs to distribute data to the transactional databases. If a database can log its individual actions, it is possible to keep a duplicate of its data up to date. The act of inserting, updating and deleting the same information in one database based on changes made in another database is called replication. Replication is the process of creating identical data records from one database in one or more other databases. The aim of replication is to keep a copy of the data, on the same or on a different platform, and to synchronize the copies to keep them consistent. Usually one database is the main database, typically at the head office, where master data is maintained and from which it is replicated to the other databases.
1.2 LS Retail Architecture
LS Retail supports the following architectures:
1. Online: Back office and POS are online with the head office, that is, POS and back office users access the head office database directly.
FIGURE 1.1-1: ONLINE ARCHITECTURE
The following are a few advantages of the online architecture:
- Full access to the Microsoft Dynamics NAV solution
- Real-time access to all data
- No data replication overhead
The following are a few disadvantages of the online architecture:
- Limited number of concurrent users at the head office database
- Highly dependent on communication lines
2. Online/Offline: Back office users access the head office database for back office transactions, while the POS has its own database for all transactions.
FIGURE 1.1-2: ONLINE/OFFLINE ARCHITECTURE
The following are a few advantages of the online/offline architecture:
- The POS is independent
- The POS can use online services
- The back office has real-time data
The following are a few disadvantages of the online/offline architecture:
- Data replication to and from the POS is required
- Reduced access to the Microsoft Dynamics NAV solution
3. Mixed store: In this architecture, some stores may be online with the head office and some may be offline. The POS may be online with the store server or may have its own database.
FIGURE 1.1-3: MIXED STORE ARCHITECTURE
The following are a few advantages of the mixed store architecture:
- Less data processing at the head office
- Sales history is available at the store
- If the store is online, the back office has real-time data
4. Standalone store: In this architecture, all the stores access their local databases and there is only very limited online access to the head office database. Here, the POS may be online with the store server or may have its own database.
FIGURE 1.1-4: STANDALONE STORE ARCHITECTURE
The following are a few advantages of the standalone store architecture:
- The store database has full visibility of store data
The following are a few disadvantages of the standalone store architecture:
- All the data needs to be replicated to and from the store database, which may take a long time
When any of the databases is offline, data needs to be replicated to and from that database.
1.3 Methods of Replication
To replicate data in a table from database A to the same table in database B, two general methods can be used:
- The program compares the data of the two tables in database A and database B to find the differences between them and then changes the data of the table in database B to make it identical to the table in database A. This method can prove slow if there are many records in the tables.
- The program knows, before replication starts, which data has been inserted, updated or deleted in database A and replicates only those changes to database B.
A replication process is required to automatically replicate changes made in one database to another. The replication process in LS Retail provides the following main functionality:
- It replicates data between two databases.
- It schedules the replication of each table at user-defined intervals.
- It controls data distribution, that is, which records are replicated where.
- It configures which fields in a table are replicated, depending on their functionality.
- It offers several methods of replication depending on the tables being replicated, whether replication time is important, whether replicated data needs to be passed on from store databases to POS terminals, or whether a simple replication is sufficient.
- It replicates data between two databases that do not have the same table and field structure.

LS Retail provides the following methods of replication:

Normal
With the normal method, data is replicated by comparing the two tables and making them identical. The process reads the records from the table in the source database (the From-Table), finds the corresponding records in the table in the destination database (the To-Table), and makes them identical. If records in the From-Table do not exist in the To-Table, the replication process adds them to the To-Table. The process also reads through the records in the To-Table and deletes those that do not exist in the From-Table. Normal replication of a table from the head office to the stores leaves no trace of which records have been deleted in the stores' databases. Therefore, if the POS terminals in a store are not online and a given table has to be replicated to them, the store database cannot tell the POS database which records to delete. It is therefore not recommended to use the normal method for tables that are sent on to POS terminals.

Normal with Replication Counter
Normal replication with a replication counter limits the number of records replicated each time. A field that acts as the replication counter (data type Integer) is maintained in the table. The replication counter value is set whenever a record in the table is inserted or modified; the value is one higher than the replication counter value given to the last record inserted or modified in the table. During replication, the Replication Counter field is compared with the replication counter of the last replication (stored at the subjob level), and only records with a greater replication counter are replicated. This method does not support deletes: records deleted from a table replicated with a replication counter are not deleted from the destination table, because a deleted record no longer exists in the source database and its counter can therefore not be compared with the counter of the last replication.
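As an illustration, a subjob using the replication counter method conceptually selects records as in the following C/AL sketch; the Item record variable, the LastCounter value and the loop body are illustrative placeholders rather than the actual LS Retail implementation:

   // Hedged sketch: select only the records changed since the last replication.
   // LastCounter stands for the replication counter stored at the subjob level.
   Item.SETCURRENTKEY("Replication Counter");
   Item.SETFILTER("Replication Counter", '>%1', LastCounter);
   IF Item.FINDSET THEN
     REPEAT
       // add the record to the outgoing package here
     UNTIL Item.NEXT = 0;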
1.3.1 Adding Replication Counter to Tables
If a table needs to be replicated with a replication counter but has no field suited to act as the replication counter, a replication counter field should be added to the table.
To Add a Replication Counter to a Table
1. Click Tools, Object Designer.
2. Browse to the table to which the replication counter field is to be added and click Design.
3. At the bottom of the table, press F3 to add a new field. Fill in the Field Name field (for example, Replication Counter) and, in the Data Type field, select the Integer option.
4. Close the table, save and compile, then click Design again.
5. Click View, Keys to open the Keys window and add the new field as a key to the table.
6. Press F9 to open the C/AL Editor window.
7. In the Replication Counter - OnValidate() trigger, add a global or local variable that denotes the table itself.
8. Then add the following code (TableName is the name of the variable just created):

   TableName.SETCURRENTKEY("Replication Counter");
   IF TableName.FIND('+') THEN
     "Replication Counter" := TableName."Replication Counter" + 1
   ELSE
     "Replication Counter" := 1;

9. In the C/AL Editor, add the following code to the OnInsert() and OnModify() triggers of the table:

   VALIDATE("Replication Counter");

10. Save and compile the changes.
11. Look for all places in the code of the database objects where records could be inserted into the table or existing records modified, and make sure the .INSERT and .MODIFY functions are called with the parameter TRUE, so that the triggers are run.
By Actions
Whenever there is a change (insert, modify, delete or rename) in the records of a table, the system keeps track of those changes in the Preaction table. With the By Actions method, the system uses the Preaction/Actions tables to replicate data from the source table to the destination: only the changes that have been made in the From-Table are sent to the To-Table. This limits the number of records that need to be replicated.
1.3.2 Adding Actions Generation Code to Tables
If a table needs to be replicated By Actions but no code has been written to generate actions on its triggers, the action-generation code can be added to the table's triggers (see Actions Management for details). A sketch of what such trigger code can look like follows.
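The sketch below only illustrates the idea of recording every change in the Preaction table from the table triggers; the PreactionMgt codeunit variable and its InsertPreaction function are hypothetical placeholder names, not the actual LS Retail helper, and the real function and parameters differ:

   // Hypothetical helper: PreactionMgt.InsertPreaction records the change in the Preaction table.
   OnInsert()
     PreactionMgt.InsertPreaction(Rec, 'Insert');   // record an insert action

   OnModify()
     PreactionMgt.InsertPreaction(Rec, 'Modify');   // record a modify action

   OnDelete()
     PreactionMgt.InsertPreaction(Rec, 'Delete');   // record a delete action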
1.4 Summary
This section explained data replication and the concepts behind it. The following topics were discussed in this section:
- About Replication
- LS Retail Architecture: Online, Online/Offline, Mixed Store and Standalone
- Methods of Replication: Normal, Normal with Replication Counter and By Actions
- Adding a replication counter to a table
- Adding actions generation code to tables
2 About Data Director
The Data Director is a product of LS Retail ehf., the developers of the LS Retail Back Office system and the LS POS application. The Data Director is closely tied to these applications and plays an integral part in enabling the system to function as a whole in a distributed environment. The Data Director is an application specialized in moving data between databases in a fast and efficient way. It can easily move data between different databases such as Microsoft Dynamics NAV and MS SQL Server; generally, if a database supports the Microsoft ADO interface, the Data Director can use it. The benefit of using the Data Director is that no programming is involved in order to replicate data. The Data Director is a flexible tool that can be adapted to a variety of data transfer scenarios. The most commonly used configuration is moving data between the head office, the stores and the POS terminals in a Microsoft Dynamics NAV based retail organization.
2.1 Modes of Data Director
The Data Director works in three different modes:
- Data Director: The regular flavor of the Data Director, used to send data between databases. This mode is non-interactive.
- 2nd Stage Data Director: This mode is used to forward the data packages created by a Data Director Service running in Data Director mode.
- Transaction Server: The interactive mode of the Data Director, used by the POS to make queries into remote databases.
2.2 Concepts in Data Director
The following is a list of terms and titles often used in connection with the Data Director:
- DBServer.exe: The server component of the Data Director that handles requests to access different database systems. The DBServer is run as a service on the host computer.
- TransAutomClient.dll (also referred to as the DD Client): Used by LS Retail for sending queries to the Data Director and the Transaction Server. This is the automation server called from LS Retail NAV to communicate with DBServer.exe.
- Database Plugins: The Data Director uses a plugin to connect to each type of database. The plugins can be regarded as drivers that allow the Data Director to read from and write into the different types of databases.
- Package: A package contains the data being transferred between the databases. A package has a destination Data Director and a destination database, which can be one or many depending on how the package was created. It essentially represents a unit of work for the Data Director.
- Host Computer: The computer on which the Data Director (DBServer) service is running.
- Source Database: The database from which data needs to be read.
- Destination Database: The database into which data needs to be written.
- Log Database: The database used by the Data Director to keep track of its tasks and the packages it is currently working on.
- Scheduler Database: The Microsoft Dynamics NAV database that contains all the details regarding the tasks that the Data Director should perform (where to send, when to send, which tables, which fields and so on). The Data Director log and the Scheduler can also be configured to reside in the same database.
- Scheduler: Used for scheduling Data Director activities at predefined intervals, such as sending sales data to the head office database every hour. The Scheduler can be run either on the Microsoft Dynamics NAV client or using the Microsoft Dynamics NAV Application Server (NAS).
- DD: An acronym for the Data Director.
2.3 Summary
This section explained the Data Director, its modes and the terminology used. The following topics were covered in this section:
- About Data Director
- Modes of Data Director: Data Director, 2nd Stage Data Director and Transaction Server
- Concepts in Data Director
3 How Data Director Works
The main function of the Data Director is to replicate data between two or more databases as efficiently as possible. It accomplishes this by aggregating data into packages, thus minimizing the amount of data transmitted over the network. These packages are processed in a multicast-like way, enabling the Data Director to handle a very high number of endpoints.

The Data Director runs as a service and listens for incoming requests and packages. The DD Client is used by the LS Retail application to interact with the DBServer service and tell it what to do. There are two types of interactions: a request to read data from the source database, and a request to write data to the destination database. If the Data Director receives a read instruction, it starts by connecting to the source database using the appropriate plugin. It then reads the data from the database and stores it in a package. The package can contain data from many different database tables.

Once the requested data has been read, the Data Director has two options. The first is to write the data in the package into the destination database, possibly using another plugin. This provides an easy and convenient way to transfer data between different databases, since the Data Director converts the data along the way, making sure the destination database understands it. The second option is to forward the package to one or more Data Directors. Once the package is received by the receiving Data Director, it can proceed to write the contents of the package into one or more destination databases. The ability to forward one package to more than one Data Director is most useful in the retail environment, where price changes or item updates need to be distributed to some or all of the stores. Once configured, this is done automatically by the Data Director, leaving the user free to focus on store operations. The DD Client can connect to a Data Director Service running on other host computers, which makes it easy to create a network of Data Directors that can be controlled from one central location.
3.1 Scheduler Database
The Scheduler Database (also known as the Design Database) is included in the Microsoft Dynamics NAV application. It stores the information that the DD Client needs to communicate with the Data Director, such as access information, logins and passwords, and the table structure of the different databases. The scheduler database needs to contain all the information required to replicate data from the source to the destination database. It also contains data that describes the structure of the databases with which communication takes place; based on this data, the DD Client can be provided with the information it needs to send requests to the Data Director, which simplifies the generation of requests. The requirement is to fill in the necessary information in the scheduler database and run a scheduler job in order to generate the requests through the DD Client. In most cases, data is transferred from multiple tables at a time, and this data transfer is configured for scheduled replication. Because the transfers are started by running the scheduler, the database is called the scheduler database. The Microsoft Dynamics NAV Database Server must be running to give the Data Director access to the scheduler database, and the scheduler database must contain all the application objects needed by the Data Director.
3.2 Log Database
The Log Database contains information about the Data Director's pending and completed work. It consists of two tables, IncomingMessages and OutgoingMessages, stored in a database accessible to the Data Director. The IncomingMessages table contains all messages or requests sent to the Data Director in question; whether data is to be sent or received is logged in this table. For example, a request from the DD Client to transfer data from the Customer table in database A to the Customer table in database B is written to the IncomingMessages table. When the data has been read from database A, the Data Director updates the IncomingMessages table to state that this part of the transfer is complete. The OutgoingMessages table contains the status of the outbound part of the transfer. Once the incoming part of the transfer is complete (the package has been generated), the Data Director can start working on the outgoing part, which is to forward the data from database A to another Data Director or to write the data into database B. When the data package has been forwarded or written into the destination database, the OutgoingMessages table is updated to state that the forward or write is complete. When a DD service receives a data package from another DD service, it updates its IncomingMessages table and, once it starts writing the data into the destination database, it updates its OutgoingMessages table. Under most circumstances the incoming/outgoing tables are stored in the scheduler database, but the system can also be set up so that they are stored in a different database; when the incoming/outgoing tables are stored in a separate database, it is referred to as a log database. Note that database A, the scheduler database and the log database can all be the same database, or they can be separate databases. The default settings of a DD service use a Microsoft Access database as the log database. In a production environment it is recommended to use a Microsoft Dynamics NAV database as the log database, which is configured by setting the system connection of the DD service to a connection string that connects to the Microsoft Dynamics NAV database.
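For example, if the log tables reside in a Microsoft Dynamics NAV database on a native server, the system connection of the DD service could take the form shown below; the company, server, user and password values are placeholders, and the connection string format itself is described under Data Director Connection Strings later in this guide:

   company=the-company;server=db.ho;user=dd;passwd=ddpwd;|fin|ndbcn@501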
3.3 Package Flow
When a Scheduler Job is run, a DD Client component (TransAutomClient.ocx) is used to instruct the Data Director to create a package by reading particular records from the source database. This package is registered in the IncomingMessages table in the log database. The DD Client also attaches a receiver list to the package, and the list is registered in the OutgoingMessages table.

The following is a step-by-step explanation of the package flow. For illustration purposes, the process shows the replication of a job with the Job ID ITEM (purpose: replication of the item master from the head office to the stores).

1. The scheduler at the head office detects that the job with ITEM as the Job ID is due and runs it, initiating the DD Client to replicate the data.
2. The job is configured to replicate several tables, some By Actions and some Normal with Replication Counter. The DD Client informs the Data Director which records from these tables should be added to packages. The data is divided into several packages: one delete package and one update package, as per the distribution of the By Action jobs, and one package for all Normal jobs. If, for example, no delete actions are processed, no delete package is created.

   PackageNo  Type           Description
   101        Action-Del     Includes the primary key list of records to delete on the destination
   102        Action-Update  Includes the primary key list of records to read from the source
   103

   TABLE 3.3-1: PACKAGE FLOW

3. The DD Client informs the scheduler of the package number assigned to each package. The scheduler registers the numbers in the relevant scheduler log entry; in this case it registers First Package = 101 and Last Package = 103.
4. A client session on the Data Director server (the one that received the commands from the DD Client) creates a query package for each package and registers it in IncomingMessages, one entry for each package. The receiver list for each package is added to OutgoingMessages, usually with many receivers for each package. The entries are in fact not written to the IncomingMessages and OutgoingMessages tables right away but are added to a temporary queue, where they wait for the server session to pick them up. The temporary queue is a file stored on the hard disk, not in the database.
5. The client session triggers its system session just before the client session exits.
6. The system session wakes up, pops all messages from the temporary queues and registers them in the Incoming/OutgoingMessages tables. If the system session is busy, it turns to this after finishing its existing tasks. The following are the entries in the IncomingMessages table before the packages are processed:

   PackageNo  JobID  Status    RemotePkg  ServerMsg
   101        ITEM   Received  0          23 records affected
   102        ITEM   Received  0          0 records affected
   103        ITEM   Received  0          12 records affected
RemotePkg is 0 because the sender is the scheduler and not another Data Director. If a data package is created by the DD service itself, RemotePkg is 0; if it is received from another DD service, RemotePkg contains the package number given by the source DD service. Below are the entries in the OutgoingMessages table at the head office before the packages are processed. Here, RemotePkg is 0 because the packages have not been forwarded to another Data Director yet.

   PackageNo  Receiver  JobID  Status      RemotePkg
   101        DD_S01    ITEM   Processing  0
   101        DD_S02    ITEM   Processing  0
   102        DD_S01    ITEM   Processing  0
   102        DD_S02    ITEM   Processing  0
   103        DD_S01    ITEM   Processing  0
   103        DD_S02    ITEM   Processing  0
TABLE 3.3-3: PACKAGE FLOW
DD_S01 and DD_S02 are the DD services for store 1 and store 2.

7. For each IncomingMessages entry, the query package is processed. The result is written to the data packages and the status is changed to Processed. The relevant OutgoingMessages entries are marked as Waiting when the data package is ready.
8. For each OutgoingMessages entry with the status Waiting, the Data Director compares the receiver name to its own name, and further steps depend on this comparison. If the receiver is another Data Director, the status is set to To Forward and the entry waits to be forwarded to the other Data Director. If the receiver's name is the Data Director's own name, the Data Director itself is responsible for updating the database, and steps 9 to 13 are skipped.
9. For each OutgoingMessages entry with the status To Forward, the package is forwarded to the relevant Data Director. The receiving Data Director returns its local package number, which is stored in the RemotePkg field on the sender side, and the outgoing entry is marked as Forwarded.

   PackageNo  Receiver  JobID  Status     RemotePkg
   101        DD_S01    ITEM   Forwarded  10
   101        DD_S02    ITEM   Forwarded  55
   102        DD_S01    ITEM   Forwarded  11
   102        DD_S02    ITEM   Forwarded  56
   103        DD_S01    ITEM   Forwarded  12
   103        DD_S02    ITEM   Forwarded  57
10. On the receiver side, the same procedure takes place as the one that retrieved the commands from the scheduler at the head office, this time taking care of retrieving the data package. The package is registered in a temporary queue and the system session is triggered.
11. The system session picks up all registered packages from the queue, stores them in the message tables and goes through the entries.
12. On the incoming side there is nothing to process, since these are data packages coming from another Data Director. The status is therefore set to Processed and the OutgoingMessages entries are marked as Waiting.

   PackageNo  JobID  Status     RemotePkg  ServerMsg
   10         ITEM   Processed  101        Ready
   11         ITEM   Processed  102        Ready
   12         ITEM   Processed  103        Ready
13. The destination Data Director is now in the same position as the head office was in step 8. It could possibly forward the data package to a third Data Director; such a setup is unusual but possible for special cases.
14. The system session goes through all OutgoingMessages entries and processes all Waiting packages. The data packages are imported into the destination databases, delete packages cause the relevant records to be deleted from the destination databases, and the outgoing entries are marked as Done.

   PackageNo  Receiver  JobID  Status  RemotePkg
   10         DD_S01    ITEM   Done    0
   11         DD_S01    ITEM   Done    0
   12         DD_S01    ITEM   Done    0
3.4 Data Distribution
The built-in data distribution mechanism in LS Retail controls the way data is distributed from the head office to the stores, or from the stores to the POS databases. For example, the data distribution of items can be defined so that certain items are only available in certain stores. It is necessary to set up data distribution if different data needs to be replicated to different stores. Data distribution should be one of the first configurations set up for a new system, and it must be set up before users start entering data into the system; data entered before the data distribution setup may or may not be distributed correctly. With data distribution, data is only sent to the distribution locations that should receive it and not to any others, which shortens the transmission time by sending only the data that each destination distribution location should receive. This can also reduce the costs related to the transmission of data. Data distribution setups vary between organizations: a chain of supermarkets may need a full setup for all tables used in the system, whereas a small single-location store may need a simpler setup, since it does not have to distribute data to other stores.
3.5 Job Scheduling
In LS Retail, data is replicated using jobs. Jobs can be scheduled to run at a specified date and time. When the Scheduler Server is running, it processes the scheduler jobs according to their schedules. The scheduler checks the Next Check Date and Next Check Time fields and processes the job with the lowest date and time, even if that date and time has already passed. To make a specific scheduler job run ahead of other jobs, a priority can be assigned to it.
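Conceptually, the job selection can be pictured with a short C/AL sketch like the one below; the SchedulerJob record variable and the RunJob call are illustrative placeholders, not the actual LS Retail scheduler code:

   // Illustrative sketch: process the job with the lowest Next Check Date/Time,
   // even if that date and time have already passed.
   SchedulerJob.SETCURRENTKEY("Next Check Date", "Next Check Time");
   IF SchedulerJob.FINDFIRST THEN
     RunJob(SchedulerJob);   // RunJob stands in for the actual job execution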
3.6 Summary
This section explained how the Data Director works, described the Scheduler and Log databases and the Data Director package flow, and introduced data distribution and job scheduling. The following topics were covered in this section:
- How Data Director Works
- Scheduler Database
- Log Database: IncomingMessages and OutgoingMessages tables
- Package Flow in LS Retail: package creation and package processing sequence
- About Data Distribution and Job Scheduling: introduction to data distribution and job scheduling
4 Setting up Data Director
4.1 Prerequisites
Below are a few prerequisites required for setting up the DD on Microsoft Dynamics NAV.
4.1.1 General
- A basic understanding of the TCP/IP networking protocol is required: how to assign an IP address, preferably using a DNS server or the local hosts file, and how to assign names to ports in the services file (see the example entries below).
- Knowledge of how to work with Microsoft Windows services and how to view events in the Event Log is required.
- Working knowledge of Microsoft Dynamics NAV is required. The setup of the DD also requires an installation of the Microsoft Dynamics NAV database server.
- A Microsoft Dynamics NAV license file (.flf) is required that has permission to access the DD application objects within Dynamics NAV and the CFront API.
- The necessary permissions to install programs and to start and stop services on the computer running the DD service are required.
- The DD comes with a demo DD license file (.lic). This license file restricts the usage of the DD but can be used for testing and demonstration purposes. The demo license allows replication to one subnet. In a production environment where data should be replicated to more than one subnet, the demo license must be replaced with a client DD license, which can be procured from LS Retail on request.
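As an illustration, the name resolution and port naming mentioned above could be handled with entries such as the following in the Windows hosts and services files; the host name, IP address, service name and port number are examples only (the default DD ports are listed in the Configuring the Data Director Service section):

   # hosts file entry: maps the DD server name to an IP address
   192.168.1.10    dd-headoffice

   # services file entry: assigns a name to the DD port
   datadirector    16750/tcp    # Data Director service port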
4.1.2 Hardware
- At least 32 MB of available RAM. The base process uses 5-30 MB of RAM for normal operation; the amount of RAM required depends on the number of plugins the DD uses.
- 50 MB of disk space for the base application and plugins. This is the absolute minimum; the additional disk space required when moving data depends on the amount of data and how frequently it is moved. There should be at least 500 MB of free space on the hard drive for the temporary data generated by the DD.
- A Pentium III class processor or better. The DD is a CPU-intensive application, especially when moving data on a LAN, so a faster processor usually means improved performance. In general, the faster the computer running the DD, the better it will perform.
4.1.3 Software
Supported operating environments:
- MS Windows XP Professional, 32-bit
- MS Windows 2003 Server, 32-bit
- MS Windows 2008 Server, 32-bit
- MS Vista, 32-bit

Supported databases:
- MS Dynamics NAV (1.30, 2.01, 2.50, 2.60, 2.65, 3.01, 3.10, 3.60, 3.70, 4.00, 4.01, 4.02, 4.03, 5.00, 5.01, 6.00)

Note:
The 5.00 version of cfront has a known bug: it does not release database connections properly and accumulates temporary files, causing it to eventually hang. Use version 5.01 of cfront instead.
The DD needs 1-3 client sessions in the database it is running against, depending on the type of activity it performs; the number of client sessions may vary with the functionality used. Since the DD will be connected to one or more databases during operation, ensure that those databases have enough free sessions to allow the DD to connect to them. The databases the DD connects to need to be accessible using TCP/IP.
4.1.4 Security Considerations
Like most data communication tools, the DD needs access to databases in order to move data between them. For security purposes, the DD's access to database tables should be restricted to only those tables it needs to read from or write into. This is important because, unlike regular database users, the DD can effectively access any table in a database, as it is not restricted to viewing data through a graphical user interface. By choosing not to restrict the DD's access to a database, there is a risk that users gain access to data they should not have access to. Most database systems allow database administrators to set up user access permissions relatively easily, and it is strongly recommended to spend some time specifying the access permissions for the user account that the DD will use to access each database. For example, the Microsoft Dynamics NAV security system provides a powerful feature that limits a user's access to database tables only, making the account useless to regular users since they do not get access to the database's graphical user interface. Similar features can be found in other database systems as well; if such a feature is available, it should preferably be used for all user accounts that the DD uses. The DD also comes with a password protection mechanism that requires the user to supply the correct password before the DD accepts any input from that user. It is strongly recommended that this feature is enabled.
4.2 Installing the Data Director
The installation of the DD is divided into the following steps:
1. Installing the Microsoft Dynamics application objects
2. Installing the Microsoft Dynamics NAV database
3. Creating the NAV user accounts
4. Installing the DD
5. Configuring the DD service
6. Configuring the NAV plugin
4.2.1 Installing the Microsoft Dynamics Application Objects
This first step requires the user to have access to a Microsoft Dynamics NAV database running on a server. The server can be either a native server or a server using MS SQL Server as the backend. This database will become the Scheduler and Log database. The Scheduler database contains the settings the Data Director needs to run properly.
Note 1: This step can be skipped if you plan to use a database containing LS Retail as your Scheduler database. LS Retail already contains the application objects required to run the Data Director. However, you need to make sure that the version of the application objects in LS Retail matches your version of the Data Director. If not, see the section Upgrading Scheduler Objects in an Existing System below.
Note 2: This step can also be skipped if you already have a Scheduler database on your network. If you only need the Log Database, you can import the NFMsgTables.fob file from the Data Director\Files subdirectory once the Data Director installation is completed. You can also use the default MS Access database as your Log Database and skip the import altogether.
The installation is done as follows:
1. Connect to the Microsoft Dynamics NAV database.
2. Open the Object Designer and select File, Import.
3. Locate the file named scheduler-objects.fob on the Data Director installation CD.
4. Click Open to import the objects.
The application objects are now imported into the database. You need to make some changes to the database in order to make the newly imported objects available from the main menu; alternatively, you can always run form 99001823 (Scheduler Menu) to get access to the Data Director menu.
4.2.2 Upgrading Scheduler Objects in an Existing System
The latest version of LS Retail should include the latest version of the Data Director objects, but when new versions of the Data Director are released, new objects are usually included in the release package. Generally there are no critical changes, and it is sufficient to wait for the next service pack release of LS Retail. The Data Director packages include a standalone database that contains only the Scheduler, the Replicator and the Data Director parts of the LS Retail system. This database includes all objects required to run the Scheduler and the Data Director. Most of the objects are identical to the ones in the retail system and can be imported into the retail system without conflicts, but some objects depend on other functionality in the retail system, and those dependencies had to be removed in the standalone database so that it compiles. These objects can be identified by the CONFLICT mark in the version list. To upgrade an existing system with a new version of the Scheduler objects, follow the steps below:
1. Make a backup of all objects in the existing system.
2. Check the latest version of the scheduler objects and note the date. Filter on *SCH* in the version list and find the latest version number.
3. Mark the objects to be exported from the standalone database.
4. Open the standalone database and mark all objects from a newer version. Note which objects are marked as CONFLICT and unmark them.
5. Import the non-conflict objects into the system: select the marked objects, export them to a .fob file and import the file into the system.
6. Manually upgrade the conflict objects. In the standalone database, look at the documentation section of each conflict object and check which changes were made after the release of the latest Scheduler version in the existing system, then modify the corresponding object in the existing system manually. Usually these are small changes, for example a new menu item, a new field or a change in the code; a code change should be marked in the code.
Note
Do not upgrade the non-conflict objects without also upgrading the conflict ones.
4.2.3 Installing the Microsoft Dynamics NAV Database
The Data Director requires a database to use as the scheduler and log database; this can be the same as the transactional database or a separate database. This step requires access to a Microsoft Dynamics NAV database running on a server. The server can be either a Microsoft Dynamics NAV native server or an MS SQL server. This database will become the scheduler and log database; the scheduler database contains the settings the Data Director needs to run properly. The LS Retail database can be used as the scheduler and log database, since it already contains the application objects required to run the Data Director. The default installation of the Data Director and configuration of the DD service uses a Microsoft Office Access database as the log database.
4.2.4 Creating the NAV User Accounts
One or more NAV user accounts need to be created for the DD, depending on the configuration.
1. System account. The system account is used to access the log database. By default, the DD uses a Microsoft Access database to keep track of the tasks it is working on; if the Access database is used, no user account needs to be created for the log database. If the plan is to use a NAV database as the log database, you need to create a Microsoft Dynamics NAV user account that allows the DD to access the log tables. This user only needs access to two tables in the database:
   - Table 99001599 IncomingMessages
   - Table 99001600 OutgoingMessages
The DD must have permission to read from and write into these tables, and the connection string for the log database defined when creating the DD service should contain a user and password that have permission for the IncomingMessages and OutgoingMessages tables.
2. Data access accounts. The data access accounts are used to access the data in the source and destination databases; the DD uses them to read from source tables and write into destination tables. The access permissions of these accounts should be limited to the tables that the DD will read from and write into, and it is also good practice to limit delete permissions to the tables where deletions are needed. The connection strings for the source and destination databases defined on the distribution location card should use these user IDs and passwords. Accounts with super-user privileges are often used in test environments, but they should never be used in a production environment.
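For illustration, a distribution location connection string for a restricted data access account on a NAV native server might look like the line below; the company, server, port, user and password values are placeholders, and the full connection string syntax is described under Data Director Connection Strings later in this chapter:

   company=the-company;server=store01.mydomain:12025;user=dd-data;passwd=ddpwd;|fin|ndbcn@501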
4.2.5 Installing the Data Director
To install the Data Director, run the DD installation file, usually called LSRetailDDx.x.x.exe, where x.x.x is the version number. The installation is straightforward; just follow the instructions in the dialogs. Once installed, everything required for the DD to replicate between Microsoft Dynamics NAV and MS SQL Server databases is available, since all the needed plugins are included by default.
Note
In Microsoft Vista, the user installing/configuring DD needs to have local administrative rights.
Locate the DDServer_XX.exe file in the Setup directory on the Data Director installation CD and start the application. Click Yes to start the installation. The following Setup Wizard window appears:
FIGURE 4.2-1: DATA DIRECTOR INSTALLATION
Click Next to continue. The following screen appears:

FIGURE 4.2-2: DATA DIRECTOR INSTALLATION
Select the directory where you want to install the Data Director. The default path is C:\Program Files\LS Retail\Data Director; the path can be changed here. Click Next and the following screen appears:

FIGURE 4.2-3: DATA DIRECTOR INSTALLATION
Click Close to complete the installation.
4.2.6 Configuring the Data Director Service
Once the DD is installed, one or more servers need to be configured:
1. Open the Data Director Setting tool located in the Data Director Start menu. It is a wizard-like interface used to manage everything concerning the DD servers, such as adding or removing servers, starting and stopping them, and configuring their parameters. When the tool is run, the Register Data Director Service window appears, see below:
FIGURE 4.2-4: DATA DIRECTOR SERVICE CONFIGURATION
2. In the Server Name field, enter the name of the new server; by default it contains the computer name. Any name can be assigned to the server; just make sure that the name used resolves to the computer's IP address by adding it to the DNS server or the hosts file. It is recommended not to use the computer name as the DD server name, and the name should not conflict with any other Windows service running on that machine.
3. Click Add to add the server to the list called All Servers. To remove a server, select it from the All Servers list and then click Remove.

To Start a Server
1. Select the server from the server list and then click Start.

To Stop a Server
1. Select the server from the server list and then click Stop.

The DD service should not be started while it is being defined; it should be started after all the parameters required for that service have been defined.

Licenses
The Data Director comes with a test license that can be used in a test environment, but once the DD is deployed at the production site, it is recommended to have a valid customer license that includes the number of subnets the site will use.
Obtaining a Customer License
A valid license key for the DD can be obtained by registering an incident with LS Retail support (LS Partner portal). The information that must be sent is: customer name, Microsoft Dynamics NAV license number and the number of stores.
To Load a New License
1. In Register Data Director Service, a new license can be loaded by clicking Load License. If the loaded license is valid, the message DD license valid is shown below the Load License button.
2. Restart the DD service to activate the new license by selecting the DD service from the service group.

If a simple setup with just one DD server is being run, there is no need to configure anything else and you can start the service right away by clicking Start. This starts the DD as an NT service, which will restart automatically when the computer starts.
Note
The Start and Stop buttons will reflect the current state the selected server is in by disabling or enabling the available action.
4.2.7 Configuring the NAV Plugin
In the same wizard, the Plugins/Controls group is available in the upper right corner. It has three buttons: one for the Dynamics NAV plugin, one for the MS SQL Server plugin and one for the Client Controls. The first two buttons are enabled or disabled according to the plugins included in the license and are used to configure the respective plugin. For example, if the license only contains the NAV plugin, the Dynamics NAV button is enabled and the MS SQL Server button is disabled. Since the test license includes all available plugins, both buttons are enabled. The Client Controls button is always enabled.

Dynamics NAV Plugin
Clicking Dynamics NAV opens the screen with the options to configure the NAV plugin:
FIGURE 4.2-6: MICROSOFT DYNAMICS NAV PLUGIN SETTINGS
To Load NAV License
1. Click Load NAV License to load a new NAV license, which makes it available for the DD to use. Once the license is loaded here, the Administration in the NAV scheduler database is used to set up the distribution of the license to other DDs.
2. A license can also be installed manually by putting it in the cfront plugin root folder and in the version folder that matches the version of NAV.
Other Settings Parameters of the NAV Plugin
The following are the other parameters of the FinPlugin, which is used to access Microsoft Dynamics NAV databases (both native and SQL).

- Codeunit Permission: Specifies the codeunit in NAV from which the plugin can inherit permissions. This can be important in cases where there is an end-user license and a need to replicate data into a write-protected table such as the Item Ledger Entry. Codeunit 99001483 contains permissions to write into most of the write-protected tables. If left as 0, the plugin cannot write into protected tables.
- Multilang. Workaround: Provides a fix for an error in NAV that occurs when the source and destination databases are not using the same language.
- Corrects illegal dates so that they can be written into a NAV date field. This only affects NAV using SQL Server as the database.
- If this is checked, packages will not cause an error even if some included fields do not exist in the destination table; such fields are simply ignored.
- Fixes decimal rounding, where decimal values may otherwise be replicated with a large number of decimal places. Type the number of decimals you want to replicate in the Round To Decimal field.
- Load NAV License: Enables loading a new NAV license for the DD's use.

TABLE 4.2-1: MICROSOFT DYNAMICS NAV PLUGIN SETTINGS

Client Controls
The Client Controls button opens a screen that contains the Register and Unregister buttons, which register or unregister the ActiveX client controls that the NAV client uses to talk to the DD.
Register Client Controls
1. Click Register to register the ActiveX client controls.

Unregister Client Controls
1. Click Unregister to unregister the ActiveX client controls.

Normally there is no need to register these controls manually, as it is done at installation. However, under certain circumstances when upgrading the DD, the old clients might have been locked when the upgrade took place so that the registration failed. If an error about missing components is reported while using the scheduler, try to unregister and then re-register the controls using these buttons to see if that fixes the problem.

To Debug the Client Controls

Enable Log Mode
1. Log Mode can be enabled by placing a check mark in the Log Mode box. This makes the client controls maintain a log of what they are doing, which can be useful when troubleshooting. If this option is enabled, make sure that the Log Dir directory exists.
2. The DD creates three log files and rotates through them when the Max. Lines limit has been reached for each log. On startup, the DD makes a copy of the old log files by appending .old to their names. This is useful if the DD service has been set to restart automatically on failure; the failure will then be found in the old log files.

To Specify Log Level
1. Log Level specifies the detail level of the debug information included in the log file. Checking Error and Main is usually enough for basic debugging, but greater log detail might be required in some cases.
Note
The Fields and Functions log levels produce a lot of information in a busy system and should not be left on for long. If a log file exists, new entries are appended to it.
- Error: Logs all errors reported from the DD.
- Main: Shows the main functionality in the DD.
- Actions: Shows detailed actions performed within the DD.
- Detail: Shows very detailed information about what the DD is doing.
- Fields: Shows database field values and information.
- Functions: Extreme debugging, for programmers' use only.
Since there are various client components in use, such as the DD Client and the Transaction Client, each component produces its own log file, named after the component.

Once all the settings in Register Data Director Service are done, click Next to move on to the settings of the DD service.

To Define Data Director Mode
1. The Data Director Mode drop-down list configures this service instance to run as a Data Director, a Forwarder (2nd Stage DD) or a Transaction Server.
FIGURE 4.2-8: DATA DIRECTOR SERVICE CONFIGURATION
When the DD is running as a Transaction Server, the number in the Transaction Srv. Port field is used for all communications. If it is running as a Forwarder (2nd Stage DD), the service only forwards packages to their destinations and cannot receive any packages. This means that at least one service must be running in DD mode, acting as the package owner, and one or more services running in Forwarder mode. The forwarder can be on a separate computer, but make sure that the log database where the incoming and outgoing messages tables are stored is accessible from there. The name assigned to the forwarder is the name used in the Scheduler to indicate which forwarder should handle a given location, and it should also be defined in the DNS or hosts files. This name is entered in the Forwarder field of the Distribution Location card.

To Define Server Application Path
1. The Server Program field displays the path to the executable of the service application. This rarely needs to be changed.

To Define Ports
1. The DD Port (the port used by the Data Director service), the Telnet Port (the port used by telnet) and the Monitor Port (the port used by the Data Director monitor, explained later) can be defined for the DD service shown in Server Name. The values shown below are the defaults.

   Data Director Port     16750
   Transaction Srv. Port  16752
   Telnet Port            23
   Monitor Port           16751
Note
If more than one instance of the service needs to run on the same computer, these values must be changed so that they do not conflict with the other DD services. Note that all the ports, including the telnet and monitor ports, must be changed. If a Telnet server is running on the computer, another port should be assigned to the DD telnet interface to avoid conflicts.
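For example, a second instance on the same machine could be assigned the following values (these port numbers are only illustrative; any free ports can be used):

Data Director Port: 16760
Transaction Srv. Port: 16762
Telnet Port: 2323
Monitor Port: 16761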
When Next is clicked, the Data Director Specific Properties window opens, where a number of properties that affect the behavior of the DD must be specified. The following table describes these properties in more detail.

Description
Displays the name of the service being configured.
Specifies the connection string the DD uses to connect to the Log Database. By default, the DD installs a Microsoft Office Access database to use as the Log Database. Dynamics NAV or MS SQL Server can also be used, but then a different connection string must be specified.
The path to the directory that the DD uses to store temporary (package) files.
The name of the table the DD uses for incoming messages.
The name of the table the DD uses for outgoing messages.
Specifies the number of days that processed incoming/outgoing messages should be kept. 0 means forever.
Timer Interval: The DD checks for packages that need to be reprocessed (communication errors) at predefined intervals. The length of that interval can be specified here.

Limits the number of job records the DD will process per connection (to databases or remote DDs). Using this feature can be beneficial in systems under heavy load to prevent the DD from going into a live lock. If the first job record destined for a connection has an error, the rest of the packages are skipped for this run and the next connection is processed. This enables the DD to handle a virtually unlimited number of waiting packages (space allowing) in linear time. As a guideline, the number should be set low when the average package size is high and higher when the average package size is low. Set to 0 to disable (not suggested).

TABLE 4.2-3: DATA DIRECTOR SERVICE CONFIGURATION
Data Director Connection Strings

Below are several examples of connection strings that can be used in the setup. In most cases it is just a matter of copying one of the strings listed below and changing the database name, user and password. The DD connection string consists of three sections: a database-specific connection string, a plugin ID and a plugin-specific string. These three parts are separated by the | symbol. Each section is configured with parameters that can differ for each plugin and each database. Use different database connection strings for different database servers.

Syntax: <database connection string>|<plugin id>|<plugin specific string>

The following are examples of connection strings that have been used in the past for each plugin.

Microsoft SQL Server
Provider=SQLNCLI;User ID=dd;Password=ddpwd;Initial Catalog=demo-data;Data Source=ho-dd;|ms|none
driver={SQL Server};server=aaron;database=DemoData;|ms|none;
Provider=SQLOLEDB.1;User ID=dd;Password=ddpwd;Initial Catalog=demo-data;Data Source=ho-dd;|ms|none

Microsoft SQL Server with AdoPlugin
Provider=SQLOLEDB.1;User ID=dd;Password=ddpwd;Initial Catalog=demo-data;Data Source=ho-dd;|ado|none

Microsoft Access with AdoPlugin
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Program Files\LS Retail\Data Director\data\msg.mdb;|ado|none

Microsoft Dynamics NAV native server
company=the-company;server=db.ho;user=dd;passwd=ddpwd;|fin|ndbcn@400
company=the-company;server=db.ho:12025;user=dd;|fin|ndbcn@501

Navision 3.56 server
company=Demo;server=1;user=dd;|nav|none

When Next is clicked, the Data Director & Transaction Server Properties window is shown with the following properties.
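As an illustration, the first Microsoft SQL Server string above splits into the three sections like this:

Database connection string: Provider=SQLNCLI;User ID=dd;Password=ddpwd;Initial Catalog=demo-data;Data Source=ho-dd;
Plugin ID: ms
Plugin-specific string: none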
Hold Connections: The number of connections to the source database that the DD should reserve.
Idle Conn. Time: Specifies the idle timeout for the reserved connections in Hold Connections.
Maximum Sessions: Specifies the maximum number of simultaneous connections that the Transaction Server can use.
Session Timeout: Specifies the timeout for queries involving the Transaction Server.
Thread Timeout: Specifies the timeout for threads used in connecting to remote locations.
Max. Forw. Threads: Specifies the maximum number of threads used when the DD is in Forwarding mode.
Max. Hop Counter: Specifies the maximum number of hops a single package can take between DDs. This prevents endless loops if the DD names are configured wrongly.
Socket Timeout: Specifies how long the DD will wait for the network to finish a particular send or receive operation. Socket Timeout = 0 disables this check. Note: this should always be set lower than the Thread Timeout to prevent the DD from killing itself when it is just waiting for the network.
The password required to get access to the DD service remotely.
The CPU number used by the DD Service.
Allow File Transfer: When this is checked, the DD allows transfer of files using the MonitorClient from Dynamics NAV.

TABLE 4.2-4: DATA DIRECTOR SERVICE CONFIGURATION

When Next is clicked, the Server Debugging Properties window is shown, where various debug options for the DD service can be set.
Keep Package Files: The DD will not delete the temporary files in the work folder. Do not leave this on for a long time, otherwise the disk will run out of space and prevent the DD from operating correctly.
Creates a memory dump file if the DD crashes.
Log/Dump Dir: The folder where the log files are saved. Make sure the folder exists.
The DD creates three log files and rotates through them when Max Lines has been reached for each log. On startup, the DD makes a copy of the old log files by appending .old to their names. This is useful if the DD service has been set to restart automatically on failure: the failure will then be found in the old log files.
Log Level: Specifies the detail level of debug information included in the log file. Usually the Error and Main marks are enough, but greater log detail may be required in some cases.
Error: Logs all errors reported from the DD
Main: Shows the main functionality in the DD
Actions: Shows detailed actions performed within the DD
Detail: Shows very detailed information about what the DD is doing
Fields: Shows database field values and information
Functions: Extreme debugging, for programmers' use only

TABLE 4.2-5: DATA DIRECTOR SERVICE CONFIGURATION
Note
These options should only be used for short-term debugging purposes; do not leave them on for long in a production environment, especially the Keep Package Files option, otherwise disk space may run out and prevent the DD from operating correctly. The Log/Dump Dir should be an existing directory, and the log mode works in the same way as the log mode for the client controls described earlier. This is the last configuration screen of the Data Director Service configuration; it can be closed by clicking either Finish or OK.
4.3 Managing the Data Director Service
After the Data Director Service has been installed successfully, follow these steps to start the service. Right-click My Computer and select Manage. The Computer Management window appears.

FIGURE 4.3-1: COMPUTER MANAGEMENT

Expand the Services and Applications branch and select Services. Locate the LS Data Director entry. The window will look like this:

FIGURE 4.3-2: SERVICES
You can now start or stop the service by clicking the relevant buttons on the toolbar or by right-clicking the service entry. Start the service. Once the service is started, check whether it started successfully: click the System Tools branch, expand the Event Viewer and select the Application branch. The window will look similar to this:

FIGURE 4.3-3: EVENT VIEWER

The Event Viewer window will contain an entry created by the Data Director. Double-click the entry to view its contents. The window will look something like this:

FIGURE 4.3-4: EVENT VIEWER PROPERTIES

This entry shows that the service has started. All subsequent messages from the Data Director service are logged in the Event Log.
There is also a faster way to see whether the Data Director Service is running. Open a command prompt and enter telnet <service name>, where <service name> is the name of the Data Director Service. The window will look like this:

FIGURE 4.3-5: COMMAND PROMPT

Here you can view the Data Director's status and licenses by pressing Q and L.
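For example, if the Data Director Service is registered in DNS or the hosts file as dd-ho (an illustrative name) and is using the default telnet port, the check could look like this:

telnet dd-ho

Pressing Q then shows the current status and L shows the loaded licenses.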
4.4 Maintenance
Once installed, the Data Director Service requires very little maintenance. There are a few things to keep in mind in order to keep the service running smoothly. Make sure that there is enough free space in the Log database; a full Log database means that the Data Director is unable to process any packages. If the customer wants to add new granules to the license, the license needs to be updated throughout the Data Director network. When the service is restarted, the new license is read and the Data Director gets access to the new granules. Simply put the new Microsoft Dynamics NAV license file in the DataDirector\Plugins\FinPlugin\Incoming directory on each Data Director in the network before restarting the service. A script can be created to simplify this task, as sketched below. Once the service is restarted, the new license file is used to access all Microsoft Dynamics NAV databases.
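The following is a minimal sketch of such a script, assuming that the Data Director machines expose an administrative C$ share, that the Windows service name is LS Data Director, and that the host names and paths are illustrative only:

    import shutil
    import subprocess
    import time

    # Hypothetical host names of the Data Directors in the network.
    dd_hosts = ["store01", "store02", "headoffice"]
    # New Microsoft Dynamics NAV license file at the head office (illustrative path).
    license_file = r"C:\licenses\fin.flf"
    # FinPlugin Incoming directory, relative to the administrative share on each host.
    incoming = r"Program Files\LS Retail\Data Director\Plugins\FinPlugin\Incoming"
    # Assumed Windows service name of the Data Director; verify it with "sc query" first.
    service = "LS Data Director"

    for host in dd_hosts:
        # Copy the license into the FinPlugin Incoming folder on the remote machine.
        target = r"\\{}\c$\{}".format(host, incoming) + r"\fin.flf"
        shutil.copy(license_file, target)
        # Restart the Data Director service so that the new license is read on startup.
        subprocess.run(["sc", r"\\" + host, "stop", service])
        time.sleep(10)  # sc stop is asynchronous; give the service a moment to stop
        subprocess.run(["sc", r"\\" + host, "start", service])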
4.5 Upgrading Data Director
To upgrade Data Director from version 2.23 or older, please follow the steps below:
1. Stop all running DD servers.
2. Take note of any configuration changes made to the DD servers.
3. Uninstall the DD server and all plugins.
4. Reboot the machine.
5. Stop all NAV clients, including NAS servers.
6. Remove the directory where the DD was installed.
7. Install the new DD version (do not uncheck any options).
8. Configure your DD servers according to the notes from step 2.
If you are installing a revision within the Data Director 2.26 version, you can simply re-run the installation after having stopped all NAV and AX clients and the installed DD servers. Even if the installation warns about removing the DD servers before you upgrade, it is safe to continue as long as they are not running.

Note

Please note that the NAV NAS acts as a normal client and needs to be stopped for the re-installation to go smoothly. If a client is running, it can hold locks on the DD client components, preventing their replacement, and the upgrade will fail.
4.6 Troubleshooting
The Data Director service should be ping-able.
The service should be running.
The service name should not be the computer name.
The service should be defined in the hosts file (or mapped with DNS), or in the services file if the service is running on a port other than the default port.
The port number on which the DD service is running should be in the firewall's exception list.
A default DD license contains only 1 subnet. The original license should be loaded if more than one subnet is involved in replication.
4.7 Summary
This section explains the prerequisites for installing the Data Director, the hardware and software requirements for the Data Director, the Data Director installation and the service configuration. The following topics are covered in this section:

Prerequisites for Data Director installation
Software and hardware requirements for the Data Director
Data Director installation: installing the Microsoft Dynamics application objects, installing the Microsoft Dynamics NAV database, creating the NAV user accounts, installing the Data Director, configuring the Data Director Service and configuring the NAV Plugin
Managing the Data Director Service
Maintenance
Upgrading the Data Director version
5 Setting up Replication

5.1 Setting up Distribution Location
In the distributed architecture of LS Retail there are a number of databases representing different locations, such as the head office, stores and POS terminals. The machines holding the source and destination databases need to be identified in the network, which can be done with a fixed IP address. The database server name is mapped to an IP address using a DNS server or the hosts file (DNS is recommended). Every database needs to have its own logical and physical entity. The logical entity is represented by the Store Card. When a store is created, the program automatically creates a Distribution Location with the same code as the store number. The distribution location is the physical address of a company in a database. It contains the connection information of the source and destination databases and of the company in the databases used for replication. A distribution location can also be created manually to represent any database, for example a POS database. For the InStore Management module, the Store No. must be the same as the Distribution Location code. To open the Distribution Location Card, click LS Retail - Scheduler, Distribution Location.

FIGURE 5.1-1: DISTRIBUTION LOCATION CARD

The distribution location has a connection string which contains the connection information of the database, such as the server name, database name, company name, user name and password used to connect to the company in the database. It also contains the DD Service for the distribution location, the database version information and the network information.
Note
The company name is case sensitive.

It is possible to replicate data between different Microsoft Dynamics NAV versions. The FinPlugin loads the different versions of cfront.dll that are used to create connections with databases of the corresponding version. For each specific database version, a different cfront.dll (available with the installation kit) is used to communicate. The Distribution Location card needs to have the corresponding version information of the database, and the path to the corresponding cfront.dll should be defined in the plugin file path.
1. Browse to the distribution location for which the version needs to be specified.
2. Specify the folder where the corresponding cfront.dll is available. Data Director provides different versions of cfront.dll with the installation kit. The default path for cfront is C:\Program Files\LS Retail\Data Director\plugins\finplugin\cfront.

It is possible to replicate data between Microsoft Dynamics NAV databases that do not have the same table and field structure. To do so, the design of the database that differs from the database storing the replication jobs (the Scheduler database) needs to be read. There are two options for reading the design of a database: Read Design (reads the tables and fields using cfront) and Read Design with Data Director (reads the tables and fields using the Data Director Service). Once the design of the tables at the location has been read, a field transfer list for replication between different tables can be defined.
1. Click LS Retail - Scheduler, Distribution Location. The Distribution Location Card window appears.
2. Browse to the distribution location for which the design needs to be read and click Functions, Read Design. A status window shows the status of the database reading.
3. When the reading has completed successfully, the tables of the database can be viewed by clicking Dist. Location, Dist. Location Tables.
4. With the Data Director it is possible to read only the table design or, optionally, both the tables and the fields. To do so, click Functions, Read Design with Data Dir.

When only a few tables have a different design, it is faster to just read the design of the tables and fetch the fields only for those tables. To do so, click Dist. Location, Dist. Location Tables, select the table and click Table, Fetch fields v/DD.

A prerequisite for replicating data is that a connection to the distribution location is available.

To Test the Connection to Distribution Locations:
1. Click LS Retail - Scheduler, Distribution Location. The Distribution Location Card window appears.
2. Browse to the relevant Distribution Location and click Functions, Test Connection.
3. To use the Data Director, click Functions, Test Connection with Data Dir.

If the connection to the location has been set up correctly, the system displays a message that the test was successful.
FIGURE 5.1-2: DISTRIBUTION LOCATION

A distribution location can have multiple sublocations (locations within a location). For example, one store may have a number of POS terminals, and there may be a need to replicate data from the head office both to the store database and to the POS databases. There are two different ways to replicate data from the head office to a store and to the POS databases of that store. One is to create a distribution location for each POS database and replicate data from the head office to the store and to each POS. The other is to create sublocations for the POS databases under the Distribution Location card of the store and replicate data from the head office to the store together with its sublocations. A sublocation provides the connection information of the POS database and is similar to a distribution location.
5.2
Prerequisite
Prerequisites for setting up a scheduler job are a scheduler job type and a scheduler subjob.

Scheduler Job Type
Scheduler job types are used for setting up the objects that run different types of scheduler jobs. Various scheduler job types can be created, for example for running replication as well as for running miscellaneous jobs (jobs that do not replicate data) in the scheduler. For running replication, a scheduler job type must be set up with codeunit 99001484, Perform Replication Job, and the option Single Location selected in the Distribution Restrictions field. Using this codeunit, data can only be replicated to a single location. For running Data Distribution, a scheduler job type must be set up with codeunit 99001466, Data Distribution; any option can be selected in the Distribution Restrictions field. Using this codeunit, data can be replicated to a number of locations. The available options are:

1. No - No restrictions are set on distribution, so the scheduler job is distributed to all locations that are members of the distribution groups set up in the Job Receiver Group for the scheduler job.
2. Include List - An include list should be set up for the scheduler job. The job is only distributed to the locations in the Distribution Include/Exclude List.
3. Exclude List - An exclude list should be set up for the scheduler job. The job is distributed to all locations except the locations in the Distribution Include/Exclude List.
4. Single Location - The job is distributed to one and only one location. The location can be selected either in the Include List window or in the To Location Code field in the Scheduler Job window.

For running object replication, a scheduler job type must be set up running codeunit 99001463, Object Distribution; any option can be selected in the Distribution Restrictions field. To run a process-only report, a dataport or a codeunit, a scheduler job type can be created with or without defining the object type and object number. If they are not defined on the scheduler job type, the object type and object number can be defined when defining the scheduler job.
2. Press F3 to create a new scheduler job type.
3. Fill in the Type Code and Description fields.
4. In the Object Type field, select whether a Report, Dataport or Codeunit is to be run with this scheduler job type. For example, Post Inventory Cost to G/L, a process-only report, needs to run periodically.
5. In the Object No. field, select the relevant object.

Another way to insert default scheduler job types is to insert default data from the Retail Setup.

Scheduler Subjob
The Scheduler Subjob contains information about the tables and fields that need to be replicated between databases and is defined in a scheduler database. A scheduler database may exist within the source database or the destination database, or it may be a separate database. The Scheduler database is the database containing the complete information required for replication, such as the distribution locations, the tables to replicate and the information about the source and destination databases. The Scheduler Subjob is used to define the information necessary for replication, such as the tables to replicate and the replication method.

Defining Source and Destination Table
For each scheduler subjob, the source table and destination table must be selected. The source table is defined in the From-Table ID field and the destination table in the To-Table ID field.

Defining Replication Method
The Replication Method is selected mainly according to the type of table that needs to be replicated. The replication method Normal with Replication Counter does not support the delete action, so it should not be used with master tables, since master tables can have delete actions. Normal with Replication Counter can be used with transactional tables because these tables do not have delete actions. Master tables should be replicated with the By Action method. Setup tables are not replicated very frequently and do not have a high number of records, so they can be replicated with the Normal method.
Defining Field List
A restriction list determines which fields should be included in or excluded from replication. It can be defined in the scheduler subjob using the Transfer Field List option under the Subjob button in the bottom right corner of the scheduler subjob; the Field Transfer Type should be Include List or Exclude List. Once the field list is defined, the Transfer Field List Exist check box is enabled automatically.

Linking Tables
Tables can be linked so that their data is replicated together with the main table, using the Linked Tables option from the Subjob button; the link between the main table and the linked tables can also be defined there. For example, the transaction entry tables can be linked with the Transaction Header table. Once the linked tables are defined, the Linked Tables Exist check box is enabled automatically.

Applying Filters
Filters can also be applied to the data to be replicated between databases using the From-Table Filters option. Once the filters are defined, the From/To Table Filters check box is enabled automatically.

Defining What To Do
The What to do field specifies what the replicator should do when updating data in the table given in the To-Table ID field. This field only applies if the Replication Method is Normal. If the replication method is By Actions, this field is not relevant; in the latter case, the Action field in the Actions table contains the information about what to do.

FIGURE 5.2-3: SCHEDULER SUBJOB

There are 9 options:
Blank: Nothing is specified.
Update: Only changed records in the From Table are updated in the To Table.
Add: Only new records in the From Table are added to the To Table.
Update-Add: Only changed and new records in the From Table are updated in/added to the To Table.
Delete: Only records that no longer exist in the From Table are deleted from the To Table.
Update-Delete: Changed records and non-existing records in the From Table are updated in/deleted from the To Table.
Add-Delete: Only new records and non-existing records in the From Table are added to/deleted from the To Table.
Update-Add-Delete: All changes, deletions and additions in the From Table are copied to the To Table. The resulting tables are identical.
Add-Only: Only new records are added to the To Table. If the record already exists, the program returns an error message and does not replicate the scheduler job.
Note
It is extremely important to select the right option in this field. For example, it is safer to use the Add-Only option than the Add option when replicating entries from a store to the head office. The reason is that if the number series for the records being replicated overlap for two distribution locations, the Add-Only option will return an error, while the Add option will not.

Normal with Replication Counter supports only the Add, Add-Only and Update-Add methods.

Scheduler Job
The Scheduler Job is the key to data replication. A Scheduler Job is used to define the what, how, where and when of replication. Scheduler Jobs define activities to be run at periodic intervals. An activity can be a report, codeunit or batch job, but the most important use is to replicate table data between locations. These jobs can run at predefined intervals in the scheduler or when Run Now is clicked.

FIGURE 5.2-4: SCHEDULER JOB

Defining From- and To-Location
For each scheduler job, the from-location and to-location can be defined using the From-Location Code and the To-Location Code fields. If data is to be replicated to a number of locations, this can be achieved by defining an include or exclude list; the Distribution Restriction should then be set to Include List or Exclude List as required. The distribution restriction is defined on the scheduler job type; by default the same value is copied to the scheduler job when the scheduler job type is selected, and it can be modified on the scheduler job. If the distribution restriction is No, the to-locations can be defined using the Receiver Group option under Job. The receiver group may include one or more distribution locations. If the distribution restriction is Include List or Exclude List, the to-locations can be defined using the Distribution Location Include/Exclude List option under Job. If the distribution restriction is Single Location, the job is distributed to one location only. The location can be selected either in the Include List window or in the To Location Code field of the Scheduler Job.
Defining Object Type and Object No.
The object type and object number must be defined on the Scheduler Job. Object Type and Object No. are filled in when the scheduler job type is selected and can be modified on the Scheduler Job. When the scheduler job runs, this object is run. The object type can be a report, dataport or codeunit.

Defining the Frequency to Run a Scheduler Job
How often a job should run can be defined using Time Units and Time Between Checks on the Scheduling Details tab of the Scheduler Job, and any restrictions depending on the weekday or the time of day can also be defined on the scheduler job.

Defining a Job that is Not for Replication
If the job is not for data replication, the Uses Job Scheduler Record field should not be check marked. Such a job processes a report or dataport, or runs a codeunit, without replicating data.

Linking Subjobs in a Job
Scheduler subjobs need to be defined in the scheduler job lines in the lower half of the window. If multiple jobs need the same subjobs, this can be achieved by creating one job with the required subjobs. When defining the other jobs, that job can be attached in the field Subjobs Defined by Job, which automatically fills in the subjobs under that job.

Defining the Job Sequence
When replicating with the Data Director it is possible to configure the order in which the subjobs are processed. This controls the order in which tables are read from the database and also the order in which they are updated at the destination sites. Sometimes it is important to control the order in which tables are locked on the destination site to prevent a deadlock situation. This is configured on the Data Replication tab in the Scheduler Job Line. The column Lock Seq. (hidden by default) can be used to set up the sequence number of execution of the subjob within a Scheduler Job.

Error Handling
This field provides the options for how the Scheduler should handle errors that occur while running the job. There are three options:

1. Skip To Next Run: The program skips running the scheduler job until the next time it should be run (according to the time interval).
2. Mark With Error and Retry: The program marks the scheduler job with an error and retries the data exchange.
3. Mark With Error and Stop: The program marks the scheduler job with an error and stops running it.

Sublocation Restrictions
Whether the sublocations of the distribution locations defined as to-locations should be included in the replication can be defined using the Distribution Sublocations field on the Scheduler Job. It has three options:

1. Excluded from Replic.: Distribution sublocations (POS terminals) are not included when running the scheduler job.
2. Included in Replic.: Distribution sublocations (POS terminals) are included when running the scheduler job.
3. Only Included in Replic.: Only distribution sublocations (POS terminals) are included when running the scheduler job, not the location (store) they are located in.

Available Miscellaneous Jobs
The objects in the following list are available in LS Retail and can be run as miscellaneous jobs in the Scheduler at defined intervals. It is important to read through the list to check which jobs need to be set up in the company.
Used to delete actions and scheduler logs in both the head office and store databases. This job needs to be run regularly in the relevant databases to manage database space properly.
Codeunit 99001461 Statistics Utils: Creates Statistics records from transactions.
Codeunit 99001560 PreAction -> Actions: Creates Actions out of Preactions.
Codeunit 99001453 Transaction Archiving: Used to archive POS transactions. It copies POS transactions into archived transactions and afterwards deletes them from the POS transactions.
Used to block staff that have today's date or an older one in the Date to be Blocked field and are yet to be blocked.
Codeunit 10001321 InStore Batch Process
Codeunit 10001416 Retail ICT Processes
Codeunit 99001468 Remote Update Statuses: Updates the remote status of the Data Director Message Tables. Some parts of the Data Director messages can be administered remotely.
Codeunit 99001481 Label Utility: Creates needed labels.
Codeunit 99008909 POS Transaction Server Utility
Defining Rules of Priority in the Scheduler
The Scheduler runs scheduler jobs according to the next scheduled date and time of the job. The priority rules are as follows:

1. Scheduler jobs with the Run Status Special Order have first priority. These are jobs that are processed by the function Order Run of Scheduler Job from the Scheduler.
2. Scheduler jobs with the Run Status With Error or Stopped with Error have second priority. These are jobs that ran with errors. The scheduler tries to process these jobs every other time.

In each pass, the Scheduler runs one scheduler job of first priority, one of second priority and then one job with a blank Run Status.
5.3
The interval at which a particular job runs is defined in the Scheduler Job. To run a scheduler job automatically at the interval defined in the job, two methods are available; both check the frequency of the scheduler jobs and initiate the replication process by sending instructions to the DD client. One is the LS Scheduler, for which a form (Scheduler Server) needs to be kept running; it checks for the jobs to run. The other uses the Navision Application Server, which runs as a Windows service, checks for scheduler jobs to run and runs those jobs.
5.3.1 LS Scheduling

The Scheduler Server window is used to set up and start the scheduler server if the server is to run scheduler jobs.

To Divide the Load of Replication Jobs
1. If a number of replicators are used, the load of replication jobs can be distributed between the replicators. When setting up a number of replicators, it is easy to set up the Scheduler for one replicator at a time: one scheduler job type should be created for each replicator, and the job types are then assigned to the scheduler jobs in the scheduler.
2. One scheduler job type should be assigned to each scheduler server. If no scheduler job type is assigned to a scheduler server, it will replicate all the scheduled jobs. The Scheduler combines all scheduler jobs in the system, whether they are replication jobs or miscellaneous jobs.

To View the Scheduler Log
How the scheduler jobs are processed can be viewed in the Scheduler. Whether the jobs ran successfully or returned an error can be checked with the Scheduler Log option under Server.
5.3.2 NAS Scheduling

Install the Navision Application Server (NAS)
The Navision Application Server (NAS) comes with the Microsoft Dynamics NAV installer. To schedule jobs using NAS, NAS first needs to be installed on the system on which the scheduler should run. When installing NAS, a name should be assigned to the server; this name is used both for the Microsoft Dynamics NAV Database Server and for SQL Server for Microsoft Dynamics NAV.
Configure the NAS Service
After installation, a snap-in should be created for the NAS service using the NAS Service Manager (this can be done with the Microsoft Management Console). The host name should be the name of the computer where NAS is installed, and the service name should be the same name that was defined while installing NAS.

FIGURE 5.3.2-1
This opens the Service identification window shown below.

FIGURE 5.3.2-2
Enter the host name as localhost and the service name as NASNAV or NASNAVSQL, as defined for the NAS service. To connect to a NAV database server, use NASNAV; for a SQL database, use NASNAVSQL.
The following parameters should be defined for the snap-in:

Property: Value
Database Server Name: NAV Server name or SQL Server name.
Database: Database name. If the NAV server is directly connected to the database, this is not required; in the case of SQL Server the database name must be defined.
Company Name: Name of the company in the database.
Setup parameter: LSRSCHEDULER DD_Server,Typefilter=DD,Log=1 - where DD_Server is the service name of the Data Director and Typefilter is the scheduler job type. If Typefilter is left blank, all scheduler job types are included.
Net Type: TCP/IP in the case of NAV, Default in the case of SQL.
Object Cache Size: The default value is 8000 and can be changed.

TABLE 5.3.2-1: NAS CONFIGURATION
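For example, if the Data Director service is named DDHO01 and only scheduler jobs of the job type REPL should be run by this NAS instance (both names are illustrative), the Setup parameter could read:

LSRSCHEDULER DDHO01,Typefilter=REPL,Log=1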
FIGURE 5.3.2-3
FIGURE 5.3.2-4

To Start the NAS Service
The NAS service can be started by pressing Start in the Microsoft Management Console, or the Windows service can be started in the standard Windows Services utility.

Note

The NAS service should run under a network account. In the Windows Services utility, on the Log On tab, the logon account should be set to a network account.
FIGURE 5.3.2-5
To Enable the NAS Scheduler
In the Scheduler Setup in LS Retail, the NAS scheduler should be enabled by selecting the Enable NAS Scheduler check box.

FIGURE 5.3.2-6: SCHEDULER SETUP

To Define a Windows User in the Database
A Windows user should be defined in Tools, Security, Windows Logins, and the required permissions should be given to that user.

FIGURE 5.3.2-7: WINDOWS LOGIN

The SUPER role (or the required roles) should be assigned to the user.
FIGURE 5.3.2-8: USER ROLE

Now start the NAS service NASNAV and it will start replicating the scheduled jobs. Since NAS runs as a Windows service, it continuously checks for scheduler jobs to run and runs the corresponding jobs. This service only runs the jobs whose scheduler job type is defined in the Typefilter of the Setup parameter, and it runs them using the DD Service defined in the Setup parameter.
5.4 Replicating Objects

LS Retail has a distributed architecture with different databases at different locations; if there is any structural change in one database, it needs to be replicated to all the stores. In a production environment it is very difficult to replicate such changes to all the databases manually. For example, if there are 100+ stores, it is almost impossible to update all the stores by hand. LS Retail has an efficient way to handle object replication. Microsoft Dynamics NAV contains three types of tables: data tables, system tables and the application object table. System tables cannot be replicated, but the other two types can. Replication of data tables is the regular data replication discussed above. The Object table is just a regular table which contains the application objects in a BLOB field. This means that the replication can be done if the required permissions are available. Objects are stored in the system table 2000000001 (Object), and object replication is basically handled as replication of data from table 2000000001.

To Define the Scheduler Job
A scheduler job should be created with the job type Object Replication. The job should use the object type Codeunit and object number 99001463 (Object Distribution).

FIGURE 5.4-1: SCHEDULER JOB

To Define the Scheduler Subjob
A scheduler subjob should be created with both the from- and to-table ID set to 2000000001 (Object), the replication method By Action and the action table ID 99001509 (Actions), and it should be defined under the scheduler job used for object replication.

To Define the Object Transfer Header
The Object Transfer Header in the LS Scheduler module is used to define the objects to transfer between databases. The Job ID that will be used to replicate the objects should be defined in the object transfer header. In the object transfer lines, the individual objects to replicate are defined.
FIGURE 5.4-2: OBJECT TRANSFER HEADER

To Confirm the Object Transfer
After defining the object transfer, it should be confirmed using Confirm Object Transfer under Functions or by pressing F11.

Note

Object transfer using this method works only with the Data Director.

Limitations - Object replication is technically similar to importing text files and runs into the same limitations:
It cannot create new Dynamics NAV standard fields/objects.
It cannot change permissions on objects to which there is no access (for example Codeunit 22).
End-user licenses cannot create new objects, add fields or modify permissions, which makes end-user licenses somewhat limiting for object replication.

Resolution - A developer's license can be used while updating objects. The Data Director also picks up a new license automatically for object replication if it is placed in the CFront\Incoming directory. From the head office, the licenses in the stores can be switched both manually and automatically. Object replication can then be done like this:

o From the head office, send a developer's license to the stores
o Run the Object Replication job
o Change back to the end-user license
This process can be automated via the Scheduler. By doing so, no license agreement is violated.
5.5 Replicating Files

Files can also be replicated using the Data Director.

To Allow File Transfer
The Allow File Transfer field should be check marked in the DD service configuration. After checking Allow File Transfer, the DD service should be restarted. This field allows file transfer to a remote PC running the Data Director. This makes it possible, for example, to transfer a developer's license to the remote location to update objects and to switch back to the end-user license when the transfer is complete.

Note
Any file can be transferred using this method. This can be a security risk.
To Transfer Files
File transfer can be done manually using the Data Director Administration Setup. The same can be done automatically by running the DD File Transfer codeunit (99001465). The #DD_HOME variable can be used to point to the DD installation folder, for example #DD_HOME\Plugins\FinPlugin\Incoming\fin.flf.

To Define the Scheduler Job
To achieve this, a scheduler job can be created with the object type Codeunit and object number 99001465, with the job type Custom.

Example

To replicate a license file, the paths of the license file at the source and destination locations can be entered in the Text field on the Object Setup tab of the Scheduler Job, like this:
SRC=c:\temp\fin.flf,DEST=#DD_HOME\Plugins\FinPlugin\Incoming\fin.flf
This method is used to place the file fin.flf in the Incoming folder of the FinPlugin. When the DD starts processing a job, it will look in this location for a new license to use. This can be done manually. The same method can be used to switch back to the end-user license.
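The same Text field format can be used for the switch back; for example (the source path and the end-user license file name are illustrative):
SRC=c:\temp\enduser.flf,DEST=#DD_HOME\Plugins\FinPlugin\Incoming\fin.flf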
5.6 Transaction Server
The Transaction Server is a different, interactive mode of the Data Director. When running in Transaction Server mode, the Data Director behaves differently from the regular Data Director mode: the functionality is greatly reduced and a log database is not required. The Transaction Server works as a service that reads data from the source database, creates a connection to the destination database and writes the data into the destination database. With the Transaction Server it is not possible to have separate services reading data from the source database and writing data into the destination database. The Transaction Server does not create data packages, which is why it does not use a log database to keep track of them. The Transaction Server is an interactive mode of the Data Director which can send queries to a remote database and get the results back, and it can also send data to a remote database.

The main reason for running the Data Director in Transaction Server mode is to allow the LS Retail POS to communicate with remote databases. This allows the POS terminals to look up data such as customer balances and stock availability in a central database. The Transaction Server allows the LS POS to make queries to the Back Office database when the LS POS is configured to run offline. Before the Transaction Server functionality can be activated on the POS, the LS Data Director service must be running in Transaction Server mode on the same network. The Transaction Server can also be used over a Wide Area Network (WAN), but performance will be slower; it can still be used for special queries.

If at any point the Transaction Server service is not working, the information about the transactions that are to be sent to the remote database is kept in the source database in table 99001615 Trans. Server Work Table. When the connection to the remote database is re-established through the Transaction Server service, the pending transactions are sent to the remote database first. For example, when the user logs on to the POS or posts a transaction and finds the Transaction Server working, the pending data is sent to the remote database.
For example, in the diagram below two Transaction Servers (TSs) have been added: one at the central database and one at the Navision node.
FIGURE 5.6-1: TRANSACTION SERVER ARCHITECTURE
In this example, the LS Retail system at the node (which may be a POS or a store) can open a connection to the central database and make queries as if the central database were local to it. This has the advantage of being faster than connecting remotely using the normal Microsoft Dynamics NAV method. Similarly, all POS terminals connect to the local database through the TS, which reduces the number of concurrent users that need to be available, since the TS handles all the requests through one connection. Whenever immediate online information is needed or must be updated, the TS should be used.

Prerequisites for the Transaction Server
1. One DD service should be created in Transaction Server mode.
2. The setup needs to be done at LS Retail - POS, Profiles, Functionality; select the Trans. Server tab.

FIGURE 5.6-2: FUNCTIONALITY PROFILE

The different options for selecting the type of transactions and the remote server with which the Transaction Server should communicate are described in the table below. To activate a function, place a check mark in front of the relevant line and select a distribution location that contains all the connection parameters that the Transaction Server requires in order to process the request. The distribution location is the remote server to which that type of query is made.

Note

This must be done on each POS (if the POS is offline) that will use the Transaction Server, and the distribution location for the remote database should be defined with the Transaction Server service as its distribution server.
Transaction type: Description

Send Transaction: At the end of a sale, when a transaction is posted to the POS database, it is also pushed to the remote database defined in the Send Transaction server. If the remote database is unavailable, the transaction is only stored in the POS database. When the remote database becomes available, all unsent transactions are sent at once. Note that the system does not detect when the remote database becomes available; sending is triggered the next time a connection to the remote database is created through the Transaction Server.

Void Transaction: At the end of voiding a transaction, the transaction is posted to the POS database and also pushed to the remote database defined in the Void Posted Transaction server field.

Suspend/Retrieve Transaction: Suspend - When the Suspend button is pressed on the POS, the POS transaction and its related lines are sent to the specified remote database. Retrieve - When the cashier requests a list of suspended transactions from the remote database, only the headers are sent back to the local database. After a specific POS transaction is selected from the list, the whole transaction is requested from the server. If successful, the transaction is deleted from the remote database; it is then only available on the POS and can be finalized, suspended again, and so on. If it is not possible to suspend the transaction to a remote database, the POS suspends it locally. If it is not possible to retrieve the transaction from the remote database, the POS displays an error. There can be two reasons for this. One is that the suspended transaction is stored in the remote database but cannot be retrieved; in this case the customer has no other option than to rescan the items. The other is that the POS that suspended the transaction was unable to store it in the remote database; in this case the customer must return to the POS terminal where the transaction was last suspended in order to retrieve it.

Customer: The cashier selects the customer to check. The POS sends a request for the updated status of the customer. The results of this query are written to the POS database, and from there the transaction proceeds normally. If the customer is blocked or has exceeded his balance, the standard POS application takes the appropriate action. Since this is a query to the remote database, error handling is rather simple: if the remote database is unavailable, the POS uses the values stored in the POS database. If connecting to the database fails, it is possible to specify a secondary server for this request as a backup.

Data Entries: Create - Entries are created as before. It is important that unique initial numbers are set up for each POS. After the sale is finished, an attempt is made to send the entry to the remote database. If the attempt fails, the Replication Counter is used to mark the entry. Otherwise, if there are previous entries that could not be sent, an attempt is made to send them; if successful, the Replication Counter is set to 0 (zero). Apply - The Transaction Server attempts to fetch the entry from the server. If successful, the entry is marked with the terminal number and sent back to the remote database (to prevent others from being able to apply it during the sale). If unsuccessful, an error is displayed and the user is prompted to either retry or abort. If a user with manager privileges, or with the value Continue on TS Error set on the Staff card, retries and the attempt fails again, the entry is accepted; an entry is inserted into the POS's local database where the amount is set to 0 and the applied amount is the amount entered. If the entry on the server was already reserved by another POS, an error is displayed; a manager is able to circumvent this error. After posting the sale, an attempt is made to send the entry (same as above). If the entry has 0 in the amount field, the entry is read from the server and the Original Amount value is added to the entry. If the update is unsuccessful, the entry is handled in the same way as described above.

Staff Validation: The POS sends a request to the Transaction Server, asking for the status of the staff member. The Transaction Server copies the corresponding staff record to the POS database. Once the staff record has been copied, the POS application processes the data locally. The POS application first checks whether the staff member is blocked; if so, the staff member cannot log on to the POS. If the POS is unable to get an answer from the Transaction Server, the POS uses the staff information in its local database to determine whether logon is allowed.

Floating Cashier: The POS sends a request to the Transaction Server, asking for the status of the floating cashier.

Inventory Lookup: This field is used to get Inventory Lookup with the Transaction Server. The command in question does not work in the normal sales menu; it only works in the lookup menu, and only when an Item lookup or a Variant lookup is done. Make sure that the special Lookup table is updated for this purpose in the head office database (or the database where the query will be made). The product groups for which the lookup should work need to be specified and then enabled for each store; this triggers an update of the Lookup table. The Lookup table should be updated regularly, either manually or automatically (from the Scheduler), at least each time a new item or store is added.

Time Registration: A check mark in this field indicates that the system will use the Transaction Server and the location in the Login Staff field for login/logout from the POS in the Time Registration module.

Serial Number Check: A check mark in this field means that the Transaction Server serial number check is enabled for this POS. The Transaction Server checks the validity of the serial number entered at the POS in the remote database.

Loyalty Points: This field contains the ID of the distribution location to or from which loyalty points are sent. If this field is enabled, loyalty points are sent to the remote database using the Transaction Server.

Online Transaction Backup: A check mark in this field means that the Online Transaction Backup is in use for this POS. The Transaction Server keeps a backup of the transactions in the remote database specified in the Online Trans. Backup server field.

Cash Management: A check mark in this field means that the information required for cash management is retrieved from the remote database server using the Transaction Server.

TABLE 5.6-1: TRANSACTION SERVER SETUP
5.7 Data Distribution

5.7.1 About Data Distribution
Data Distribution makes it possible to distribute the desired data to the stores. It allows the distribution of data from the head office to the stores to be controlled. For example, the distribution of offers can be defined in such a way that certain offers are only available in a certain number of stores. In order to set up Data Distribution, the following setup is required:

1. Table Distribution. This is used to specify the Data Distribution type used for individual tables in the system.
2. Distribution Groups. These are used to specify how stores and POS terminals, as well as other distribution locations, are grouped together. A distribution group can be created for each city the stores are in, or distribution groups can be created for chains of stores.
3. Distribution Locations. A distribution location is where the connection parameters of a database are defined. By default these are the head office, the stores and the POS terminals, but other databases can be defined as well. To send data into or read data from a database, the setup needs to define parameters such as the database server, access information, the network connection and other related information.
4. Distribution List. This is used to define where individual records are distributed.
Data Distribution should be one of the first things set up in a new system. The distribution must be set up before users start entering data into the system; data entered before the distribution is set up may or may not be distributed correctly. Distribution setups vary from one organization to another. A chain of supermarkets would probably need a full setup for all tables used in the system, while a small single-location store would need a simpler setup, since it does not have to distribute data to other stores.

For example, consider the distribution of an item when the description of the item is changed. The system detects that a change has been made to a record in the Item table. The system writes an entry into the Preaction table (code that generates the entry in the Preaction table is written in the different triggers of the Item table), stating the ID of the record, the number of the table (the Item table is number 27) and that the change was an update to an existing record.

FIGURE 5.7.1-1: PREACTIONS

The PreAction->Action (99001560) codeunit is run. The codeunit sees from the Preaction table that a change has been made to a record in the Item table. The codeunit then looks at the Table Distribution for the Item
table to find out how the Item table should be distributed. Depending on the table distribution, the program determines how the record should be distributed. Once the distribution of the record is found, the program inserts entries into the Actions table for each distribution group and subgroup that the record has.

FIGURE 5.7.1-2: ACTIONS

The whole idea behind the table distribution, the groups and the actions is to have a simple way of finding out which data should go where. The concepts related to distribution may seem complicated, but there is no need to worry: once the distribution has been set up, it requires almost no maintenance or intervention on the user's behalf.
5.7.2 Distribution Groups

Just as store groups need to be defined in LS Retail, distribution groups also need to be defined in the system for the distribution of data. A store group is a logical grouping of stores, while a distribution group is a grouping of distribution locations for replication. Distribution groups are used to group stores together. By grouping stores, the task of assigning distribution to records becomes easier, since there is no need to specify the distribution for each store, only for a group of stores. Distribution groups are similar to mail groups: once a mail group is created, recipients can be assigned to it. The same goes for distribution groups, but instead of recipients, stores are assigned. To distribute data, it is very important to decide on the grouping of stores on the basis of the data distribution. A store group consists of a distribution group and a subgroup. A group can have one or more subgroups. A store can be assigned to only one distribution group, but it can be a member of many subgroups.

Store Group
The Store Group table is used to set up store groups. Store groups are used to group stores together based on common characteristics, such as size, location and product range. This allows various common settings to be defined for a group of stores instead of for each individual store. This applies to connecting stores to distribution groups, Inventory Management distribution and so on. A store group can be mapped to a distribution group or to a subgroup of a distribution group. When starting to use LS Retail, and before starting data entry in the program, it is necessary to create one store group that includes all stores. One store group can be created and marked No Filter. This store group is then connected to its corresponding distribution group.
Example
ACME Inc. runs a chain of hardware stores. ACME has stores in Washington, New York, Seattle, Los Angeles and San Francisco. Distribution groups for ACME need to be created. A distribution group named ACME can be created. Within that group create two subgroups, EAST and WEST. Once these are created assign the stores to group ACME. Furthermore, assign the Washington and New York stores to subgroup EAST and the Seattle, San Francisco and Los Angeles stores to group WEST. Now data can be distributed to New York and Washington store by selecting group ACME and subgroup EAST (ACME->EAST). ACME->WEST can be used to distribute data to Los Angeles, San Francisco and Seattle. Data can be distributed to all the stores by selecting group ACME and no subgroup. If data need to be distributed to one store only - Seattle, select ACME->SEATTLE. Create subgroups NORTH and SOUTH, and assigned stores to them. These groups can coexist with subgroups EAST and WEST because there is no limit on the number of subgroups within a group. The same goes for the stores (or members). These can be members of many subgroups but only one group. This selection method is used when distributing data with the distribution list. Once the record to be distributed is selected, simply fill in the values for the group and subgroup.
The system expects one group to represent all stores within the organization. This group should have one subgroup as well. These groups are usually named All and referred to as All->All. This group and subgroup are a part of the default values for LS Retail.
Actions and Preactions
Actions and preactions are entries in the Action and Preaction tables. The purpose of these tables is to:
- Keep track of changes made to data in LS Retail.
- Keep track of how the changes should be distributed.
The difference between the tables is that the Preaction table only knows which data has changed, whereas the Action table knows which data has changed and also how it should be distributed. Every time a user changes data in LS Retail, a log of the change is written to the Preaction table. A change means the creation, modification or deletion of a record in an LS Retail table, and every change is logged in the Preaction table. This allows the system to keep track of which records have been changed and when. Note
The system knows only which record has changed. It does not know which fields in the record have been changed.
The Preaction table tells which records have changed, but it is also necessary to know how the changed records should be distributed. Should they be sent to all stores or just to a single store? To find this out, preactions need to be converted into actions. This is done by running the codeunit PreAction->Action (99001560). By running this codeunit, the system converts all preactions to actions, giving a list of all changed records in the system and also telling how they should be distributed. The system keeps track of which preactions have been converted into actions in the Action Counters table. Based on the value in this table, only newly generated preactions are converted into actions. Note:
A scheduler job using the By Action method can be created with the Action Table ID set to either the Preaction or the Action table. If the Action Table ID is set to the Preaction table, data is replicated to all locations defined in the include list, or to the stores in the store group defined as the receiver group of the scheduler job, because the Preaction table only contains information about which records have been modified and does not contain the distribution information. To distribute data location-wise, the scheduler job must be created with the Action table as the Action Table ID.
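Conceptually, the PreAction->Action codeunit walks through the preactions created since the last run and turns each one into one or more actions that carry distribution information. The sketch below is a simplified Python illustration of that idea only; the real logic lives in codeunit 99001560 and works on the Preaction, Action and Action Counters tables, and the dictionaries and function names used here are invented for the illustration.

    # Simplified illustration of converting preactions into actions.
    # A preaction records only *what* changed; an action also records
    # *where* the change should go (the location group filter).

    last_converted = 0        # value normally kept in the Action Counters table

    def convert_preactions(preactions, distribution_for):
        """Convert all preactions newer than the stored counter into actions."""
        global last_converted
        actions = []
        for p in sorted(preactions, key=lambda p: p["counter"]):
            if p["counter"] <= last_converted:
                continue                  # already converted on an earlier run
            for group, subgroup in distribution_for(p["table_id"], p["record_key"]):
                actions.append({
                    "table_id": p["table_id"],
                    "record_key": p["record_key"],
                    "type": p["type"],    # Insert, Modify or Delete
                    "location_group_filter": f"{group}->{subgroup}",
                })
            last_converted = p["counter"]
        return actions

    demo = convert_preactions(
        [{"counter": 1, "table_id": 27, "record_key": "1000", "type": "Modify"}],
        lambda table_id, key: [("ACME", "EAST")])
    print(demo[0]["location_group_filter"])    # ACME->EAST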
When a new scheduler subjob using the method By Action is created, the Action Table ID is set to the Preaction or the Action table depending on the value of Replicate using Preaction in the Scheduler Setup. If this field is check marked, the subjob with the method By Action is created with the Action Table ID set to the Preaction table, otherwise to the Action table. The Action Table ID can be changed manually.
Default Distribution in LS Retail
The Table Distribution controls how individual tables in the system are distributed. Note that the table distribution says nothing about where the tables are distributed, only how they are distributed. LS Retail consists of many different tables, which serve different purposes, and how a table should be distributed depends on its functionality. For example, the Gen. Product Posting Group table should be distributed to all locations. The Item table might have a more selective distribution, depending on the selection of items in the stores. Records in the Store table should only be distributed to the store each record represents. These three tables would therefore have different Table Distribution settings, since their distribution follows different principles. The Table Distribution is closely linked to the Distribution List. The Table Distribution is a setup table where the type of distribution of the different tables is defined, whereas in the Distribution List records are created for the replication of table records on the basis of the Table Distribution setup. When a record in a table is inserted, modified or deleted, data is entered in the Distribution List, which tells the system how to replicate the record. Note the difference between the Table Distribution, which dictates how tables are distributed, and the Distribution List, which dictates how records are distributed. The Distribution List is used only when distribution needs to be specified on record level. The different types of Table Distribution are:
- All
- Specific
- By Master Only
- Table Default
- No Distribution
- All Default
All - This option tells the system that the table should be distributed to all locations. The distribution list for this table will be empty, since the system can find the distribution for all records in the table by looking at the Table Distribution. In the example above, the Gen. Product Posting Group table should have table distribution type All.
Specific - This option tells the system that distribution is specified on record level. In this case, the user has to fill in the distribution list manually for each record in the table. In the example above, the Store table should have table distribution type Specific.
By Master Only - This option allows tables to be linked together. In many cases it can be convenient to link the distribution of one table to another. For example, if items and the barcodes for those items should have the same distribution, the Barcode table can be linked to the Item table so that all barcodes related to an item get the same distribution as the item. This is a very powerful feature because it allows the user to link many tables together and only specify the distribution on the top level. This kind of distribution is used in the default settings that come with the system. When By Master Only is used, the user must define the link between the tables. This is done through the Table Links field in the Table Distribution window. In the previous example, the Barcode table would be linked to the Item table by linking the Item No. field in the Barcode table to the No. field in the Item table. Some kind of relation must exist between the tables when this option is used; it cannot be used if there is no logical relation between the tables. For example, the Item table cannot be linked with the Customer table, since there is no direct table relation between those two tables. When linking tables, it is preferable that the linked fields are part of the primary keys of the linked tables. This is not absolutely necessary, but it improves the performance of the system considerably.
Table Default - This option is reserved for four tables in the system:
- Price Group
- Store Ordering Information
- Shelf Label Setup
- Item Label Setup
The user is advised not to change the distribution of these tables. This distribution option cannot be used for other tables in the system.
No Distribution - This option means that the table is not distributed. Actions for this table are not created.
All Default - This option is a combination of All and Specific. When a record is created in a table with the distribution type All Default, the system creates a distribution list entry that tells the system that the record should be distributed to all locations. The user can, however, modify the distribution for the record afterwards.
These are the different types of table distribution. How to create groups of stores for distribution, and how to assign distribution to records with the distribution list, has been discussed under Distribution Groups. The system comes with a predefined Table Distribution that should be appropriate for most organizations; changes to the default table distribution settings should only be made by advanced users. When a master table changes and an action is generated for a record in it, actions can automatically be generated for the linked tables by placing a check mark in the Actions on Linked Tables, Linked Actions on Insert, Linked Actions on Modify and Linked Actions on Delete fields. For example, with the default distribution setup, an action can be created for the Barcode or variant table when an action is created for an Item table record.
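The effect of the different distribution types on the Distribution List can be summed up in a short sketch. The Python fragment below is only an illustration of the rules described above; the on_record_insert function and its parameters are invented for this purpose and are not part of LS Retail.

    # Illustration of what the system creates automatically when a record is
    # inserted, depending on the table's distribution type.
    def on_record_insert(dist_type, master_entries=None):
        """Return the Distribution List entries created automatically.

        dist_type      -- the table's Table Distribution type
        master_entries -- the master record's entries, used only for By Master Only
        """
        if dist_type in ("All", "No Distribution", "Specific"):
            # All: distribution is implied by the type, no entry is written.
            # No Distribution: the table is never replicated.
            # Specific: the user fills in the Distribution List manually.
            return []
        if dist_type == "All Default":
            return [("All", "All")]            # default entry; the user may change it later
        if dist_type == "By Master Only":
            return list(master_entries or [])  # inherit from the linked master table (e.g. Item)
        raise ValueError(f"unknown distribution type: {dist_type}")

    # A Barcode record linked to an item distributed to ACME->EAST:
    print(on_record_insert("By Master Only", [("ACME", "EAST")]))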
Location Distribution
To distribute data location-wise, a distribution location, distribution group or store group needs to be specified for every record. Store groups and distribution groups are in turn mapped to distribution locations. This linking can be done using the setup defined in the Table Distribution Setup. For example, when a new item is created, a price group can be attached to it. The price group is linked to a store group, and the store group is mapped to distribution locations through a distribution group and distribution subgroup. The system creates actions using this mapping for the corresponding locations, distribution group and distribution subgroup. If the distribution location needs to be defined separately for each and every record, the Location Distribution can be used. Location Distribution is used to set up and/or view the distribution setup for a record in a specific table; for example, the distribution for a specific item or price group can be viewed in this window. Distribution affects the system by dictating the replication of data from one distribution location to another when replicating with the replication method By Actions. When replicating to a location, the replication process looks only at actions for records whose location distribution includes the location being replicated to. The process checks whether the Location Group Filter of the action is a part of the Distribution Group Code of the Scheduler job and replicates the data accordingly. If the table's distribution type is All Default, Master Default or Table Default, the program creates a default distribution for the record after it is inserted and the necessary fields are filled in. It creates one or more entries in this table for the record, depending on the table's distribution type. The default distribution for the record can be changed. If the distribution type is Specific, the specific distribution for the record must be defined; the program does not create any distribution automatically. If the table's distribution type is All or By Master Only, the program also creates an appropriate distribution for the record in the table; this distribution cannot be changed here. Note
No record can be entered into the Distribution List table if the table involved has no entry in the Distribution Setup table, or if its distribution type in the setup is All, By Master Only or No Distribution. When an entry is created in the Location Distribution table, the program creates an Action whose Table ID is the table number and whose Value is the record's primary key. The Action's Location Group Filter is the Distribution Group Code and the Distribution Subgroup Code combined.
When an entry in the Location Distribution table is deleted, the program creates a Delete action in the same way as above. When an entry is added or deleted in the Distribution List table for a table that is the Master Table ID for other tables with the distribution type By Master Only or Master Default, the program performs the same changes to the Distribution List entries for those tables. The field Default Distr. Chg. Mess in the Scheduler Setup table controls whether the program shows a confirmation message when changing the distribution of a table with the distribution type Master Default. When the distribution type is By Master Only, the program performs exactly the same changes, since the location distribution needs to be the same for the master table and the tables underneath it. The purpose of this field is to be a reminder of the location distribution structure chosen when starting to use the system. It is not practical to have this field marked in a large database where location distribution of the type Master Default is used. Be aware of the consequences of deleting an entry in the Distribution List table for an item. If data is replicated from head office to stores, deleting a distribution list entry causes the deletion of the item in the databases of all locations included in that location distribution list. If the item has already been sold in the store, that is, if transactions and/or ledger entries exist for the item, consider blocking the item in the store instead of changing its distribution. Otherwise the replication process deletes the item and leaves entries behind for a non-existing item. If data is not replicated, changing the location distribution of sold items does not cause problems.
Troubleshooting
For data to be replicated, the test connection for the distribution locations of the source and the destination must be successful. If data needs to be distributed location-wise, the scheduler subjob must have the replication method By Action and the Action Table ID must be the Action table. If the scheduler subjob is created with the Action Table ID set to the Preaction table, data is replicated to all locations defined in the include list/receiver group. If the structure of the source and destination tables differs, the design must be read for the distribution locations and the transfer field list must be defined with a From- and To-Location. In production installations, it is suggested that a Microsoft Dynamics NAV database is used as the log database. Scheduler subjobs with different replication methods (Normal and By Action) should not be mixed in one scheduler job; different jobs should be created for subjobs with different replication methods, which improves performance. Sometimes it is important to control the order in which tables are locked at the destination site, to prevent deadlock situations, or the order in which tables are read from the database and updated on the destination sites. The Lock Seq. field in the Scheduler Job Line should be used to define the sequence number of the subjob. Note
This column is not visible by default. The Show Column option must be used to make it visible.
The user running the LS Scheduler/NAS Scheduler must have the required permissions to run the different scheduler jobs, and the scheduler job must have its scheduling time interval set. The Navision Application Server should be run with a network user account, and the same network user must be created in the Microsoft Dynamics NAV database at that location. When creating a scheduler subjob for a table that will always hold a very high number of records, the Action Counter Interval or the Repl. Counter Interval field should be used to decrease the size of the packages, for replication by actions or normal replication with a replication counter respectively. For example, if a table consists of 1,000,000 records and is replicated using normal replication with a replication counter, the package will be very big and will take a long time to replicate; setting the Repl. Counter Interval to, for example, 10,000 keeps the packages smaller. If data of a table is replicated from a source database to a destination database using the By Action method, and the data then needs to be replicated from the destination database to further destination databases using the By Action method, the Move Actions field should be check marked in the scheduler subjob for that table, so that the system moves the actions/preactions according to the Action Table ID. For example, when replicating data from the head office to the store databases, Move Actions should be check marked if the data is to be sent on from the stores to the POS databases using the By Actions method. The Uses Scheduler Job Record field of a scheduler job should not be ticked if the job does not replicate data or is a process-only job, for example one that only runs a codeunit, report or dataport.
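The effect of the Repl. Counter Interval can be illustrated with a short sketch: instead of one package covering the whole counter range, the range is split into slices of the given size. This Python fragment is only an illustration of that idea; the counter_slices function is invented here and is not part of the Data Director.

    # Illustration: splitting a replication-counter range into smaller packages.
    def counter_slices(first_counter, last_counter, interval):
        """Yield (from, to) counter ranges no larger than the interval."""
        start = first_counter
        while start <= last_counter:
            end = min(start + interval - 1, last_counter)
            yield (start, end)
            start = end + 1

    # 1,000,000 records replicated with an interval of 10,000 gives 100 packages.
    print(sum(1 for _ in counter_slices(1, 1_000_000, 10_000)))   # 100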
Summary
This section explains how to set up data replication in Microsoft Dynamics NAV. For setting up replication, Scheduler Job Types, Scheduler Subjobs and Scheduler Jobs need to be set up. Replication can be scheduled either by the LS Retail Scheduler or by the Navision Application Server (NAS). The user can also replicate objects and files (licenses). This section also explains store-group-wise replication. The following topics are covered in this section:
- Setting up Distribution Locations: Distribution Location and Distribution Sublocation
- Setting up Scheduler Jobs: Scheduler Job Type, Scheduler Subjob
- Setting up Scheduling: LS Scheduling and NAS (Navision Application Server)
- Replicating Objects
- Replicating Files
- Replication using Transaction Server
- Setting up Data Distribution: Distribution Group, Store Group, Actions and Preactions, Table Distribution and Location Distribution
6 Administration
Data Director is used to replicate data between different locations, and it replicates the data in packages. Packages need to be tracked, and if data is not delivered successfully to the destination, the status of the package and the reason for the problem, if any, need to be found.
6.1 Using the Data Director Monitor
The Data Director Monitor is used to view the usage of a Data Director Service, that is, what the service is doing at present. With the help of the monitor, the user can view which package is currently being processed by the Data Director Service. The user can view the usage of one service at a time. The Data Director Monitor creates a stack of the activities being done by a particular Data Director Service. The Data Director Monitor can be run from Data Director Administration in the LS Retail Scheduler menu, where the Data Director Service name and the monitor port can be defined in the server info (Server and Port) fields. Click the Connection on/off button to view the workings of the Data Director Service. The monitor only shows the activities the DD Service performs after the monitor form has been opened; it does not keep track of the complete history of jobs done by the Data Director Service.
6.2 Remote Administration Tool
Data Director replicates data in the form of packages, and the system keeps complete track of the packages in the Log database, as explained in Package Flow. The source Data Director Service keeps track of the packages being forwarded in its log database, and in the same way the receiving Data Director Service keeps track of the packages being received in its own log database. In a production environment it is very difficult to track packages in different databases at remote locations. For example, if there are more than 100 stores, it is almost impossible to track packages in so many remote databases (100 stores and head office). LS Retail provides a tool to track the package flow, called the Remote Administration Tool. This is an interface for viewing the status of packages at the source location as well as at remote locations. For example, if data is replicated from the head office to the stores, or from the stores to the head office, the Remote Administration Tool provides the status of packages at the head office and the status of packages at the stores, all in the head office database (or in the scheduler database, if one is used). The DD Service at the head office maintains the status of packages in the IncomingMessages and OutgoingMessages tables of the Log database, and the DD services at the stores maintain the status of their packages in the same way. The Remote Administration tool uses two more tables, Remote Inc. Msg. and Remote Outg. Msg., for storing the status of remote packages. Data from the IncomingMessages and OutgoingMessages tables of the log databases at remote locations is replicated to the Remote Inc. Msg. and Remote Outg. Msg. tables. Using this tool, the status of local packages and remote packages is available in one database, which makes administration much easier.
FIGURE 6.2-1: REMOTE ADMINISTRATION TOOL
To use the Remote Administration tool, the distribution locations for which the packages need to be administered must be defined in LS Retail Scheduler, DD Administration, DD Administration Setup.
FIGURE 6.2-2: DD ADMINISTRATION SETUP
Once the setup is done for all the distribution locations, the packages at the different locations can be tracked using the DD Administration Card. A codeunit, Update Remote Statuses, can be run in the Scheduler (DD Administration Card, Checks, Check All Locations) to read information from the Incoming/Outgoing Messages tables of any Data Director within the whole system.
FIGURE 6.2-3: DD ADMINISTRATION CARD
Every time it runs, the codeunit goes through all enabled lines in this table, connects to the relevant Data Director and requests it to report the latest changes in the Incoming/Outgoing Messages tables. The latest changes are identified by the Counter field in each table, and the Last Inc. Counter and Last Outg. Counter fields store the latest counters received. Once the remote messages are in the head office database, the user can:
- View the status of messages
- Change the status of messages
- Send status updates to the remote database, such as resetting the Trycount
- Reprocess messages that contain configuration errors
Remote Inc. Msg. and Remote Outg. Msg. can also be updated, and the Update Remote Statuses codeunit then updates the relevant Incoming/Outgoing Messages line the next time it runs. Note that the tool does not get rid of configuration errors; it only helps to deal with them. To update the package status for remote locations using the Remote Administration tool, the setup needs to be done in the Administration Setup: for the distribution locations for which the status or the value in the Trycount field needs to be updated, the Update Location field should be check marked. This feature is used when Remote Incoming/Outgoing messages from stores have been read; by placing a check mark in this field, all changes or corrections made locally are sent out to the remote location. By default, this field is hidden in the DD Administration Setup, so it needs to be made visible through the Show Column list. A typical example is the cancellation of a package or the resetting of the Trycount field. The status can be updated by drilling down into the incoming and outgoing messages in the remote administration tool, and it is updated (replicated) to the location when the following process is started: DD Admin, Checks, Check All Locations. If a cancelled data package needs to be sent again, this can be done from the Scheduler Log. The Scheduler Log is kept in the system by the Data Director for every scheduler job. In the Scheduler Log form of a Scheduler Job, the log can be viewed for every replication with its package number, and the package can be resent using the Process Again or Process Again for Location option under the Functions button.
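The counter-based polling that Update Remote Statuses performs can be pictured with the sketch below. It is a Python illustration only, not the C/AL codeunit; the function names and parameters (fetch_messages, store_remote and the location dictionary keys) are invented for this illustration.

    # Illustration of counter-based polling of remote Incoming/Outgoing Messages.
    def check_all_locations(locations, fetch_messages, store_remote):
        """For every enabled location, ask its Data Director for message rows
        newer than the last counter seen and store them locally."""
        for loc in locations:
            if not loc["enabled"]:
                continue
            for table in ("Incoming", "Outgoing"):
                last = loc[f"last_{table.lower()}_counter"]
                rows = fetch_messages(loc["dd_service"], table, newer_than=last)
                for row in rows:
                    store_remote(loc["code"], table, row)     # Remote Inc./Outg. Msg.
                    last = max(last, row["counter"])
                loc[f"last_{table.lower()}_counter"] = last   # Last Inc./Outg. Counter

    # fetch_messages and store_remote would be supplied by the caller, for example
    # functions that query a remote DD service and write to local tables.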
6.3 Troubleshooting
Error Codes
Following is a list of error codes, the probable cause of each error and how to resolve it. In some cases a brief description of the scenario is included.
Error 0 (0x0) - Error on sending request
Probable Cause: The Distribution Server for the location in the scheduler has not been specified.
Resolution: Open the Distribution Location card for the location in the scheduler and specify the DD the location should use.
Error 4096 (0x1000) - Error inserting in system tables
Probable Cause: The DD cannot write into the Incoming Message or Outgoing Message tables because they are either missing or the correct permissions have not been set for the DD user. The fields in the tables could also be wrong or missing.
Resolution: To make sure the correct version for the DD is installed, import the NFMsgTables.fob from the patches directory where the DD is installed. Check that the NAV license the DD uses includes write permission for the Incoming Message and Outgoing Message tables. Check that the user the DD uses to log on to the log database has read and write permission on the Message tables. It is also possible to renumber the Incoming/Outgoing Messages tables to a new ID; this is possible since the DD uses the names specified in the Data Director Settings Tool to locate the correct tables.
Error 4097 (0x1001) - Remote connection dropped
Probable Cause: The TCP/IP connection was terminated while the DD was forwarding a package, or the receiving DD Service was shut down during transfer.
Resolution: Re-establish the network connection. If the network connection is slow or heavily loaded, consider increasing the TCP/IP timeout for the sender and receiver.
Error 4098 (0x1002) - Cancelled because of send and receive size
Probable Cause: The registered size of the packet does not match the actual size of the packet. This is most likely caused by a transmission failure.
Resolution: Resend the packet.
Error 4099 (0x1003) - Cannot find new packet number
Probable Cause: The DD could not assign a new packet number to an incoming packet. This is usually caused by an incorrect System Connection string in the Data Director Settings Tool for the receiving DD. The connection string has probably specified an incorrect or invalid company name. This causes the DD to be able to log on to the database but not to select a company. Since it cannot select a company, it cannot find the latest entries in the Incoming Messages table and therefore cannot assign a new packet number to an incoming packet.
Resolution: Correct the connection string for the receiving DD.
Error 4100 (0x1004) - Cannot instance a socket
Probable Cause: The operating system could not create a Windows TCP/IP socket. This can happen if TCP/IP is not installed on the computer.
Resolution: Make sure that the TCP/IP protocol is installed on the computer.
Error 4101 (0x1005) - Timeout expired in transaction server
Probable Cause: The transaction server is kept busy by other sessions. The session in which the error occurred has been waiting its turn, but the waiting time exceeds the timeout configured in the Data Director Settings Tool (see the Data Director and Transaction Server Properties page). An already connected client may be hanging and holding up the session. If a transaction server client is connected to a DD that is not in transaction server mode, it is likely that the DD is working on large packages that can take a long time to process. This can also happen if a test connection is run through a busy DD.
Resolution: Restarting the service always clears the client sessions and removes any hanging connections. If the DD is busy, try again later. In the case of a hanging session, locate the client (often a POS terminal) that causes the problem and fix it there.
Error 4102 (0x1006) - HopCount has exceeded its maximum value
This only applies to the transfer of packages. Attached to each package is a hop counter which is incremented every time the package is forwarded. If the counter exceeds the maximum value set in the Data Director Settings Tool, this error occurs; its purpose is to prevent a package from going into an infinite loop.
Probable Cause: A service name was incorrectly assigned to an IP address, or there is an inconsistency between the DD name used in a Distribution Location card and the one actually assigned to the DD using the Data Director Settings Tool. They must always match for the package to be picked up; otherwise it will always be forwarded.
Resolution: Check the DNS registration of the name or the entry in the hosts files (on all computers involved). Make sure the name in the Distribution Location matches the one in the Data Director Settings Tool.
Error 4103 (0x1007) - Receiver has rejected the package, probably wrong password used
A connection was established but failed during transfer.
Probable Cause: Network failure, or the package was rejected by the receiver because of a wrong password.
Resolution: Check that the password is correct or wait for the next retry (the network may be unstable).
Error 8192 (0x2000) - Error in processing request
Probable Cause: An unhandled exception has occurred while reading or writing to a database. The connection to the database is usually in order when this error occurs. A possible cause is an invalid or incorrect field transfer setup in a subjob.
Resolution: Check the Event Log for more information. Since this error happened on the database level, it is likely that the database has reported what caused the error in the Event Log.
Error 8193 (0x2001) - Cancelled because of previous error
Probable Cause: The DD has reported an error that has not been handled by the calling application. This can happen when third-party applications use the DD and fail to handle errors that occur.
Resolution: There is no simple resolution for this, since the error is usually caused by incorrect use of the DD API.
Error 12288 (0x3000) - Error connecting to DD
Probable Cause: The DD Client cannot connect to the DD. Possible causes are:
1. The DD service is not running.
2. The DD Client is trying to connect to a service that does not exist or does not have a correct IP address associated with it.
3. The DD Client is trying to connect to the DD on a TCP/IP port that is invalid.
Resolution: Start the DD service and make sure it is running. Make sure that the DD service name has an IP address associated with it; this can be checked by pinging the service name (example: ping dd-ho). If the service name does not respond to a ping command, the DNS server or the hosts files need to be reconfigured so that the service name points to the correct IP address. The DD Client may also be trying to connect to the DD on a port that is not in use. If the DD is using a port other than the default listen port (16750), make sure that the port is specified in the Services file on the computer and associated with the DD service name. If the default port is used, there is no need to specify anything in the Services file.
Error 12289 (0x3001) - The connection string was empty
Probable Cause: The connection string sent to a client control was empty.
Resolution: Specify a correct connection string for the Distribution Location.
Error 12290 (0x3002) - Could not log in
Probable Cause: The DD does not have the necessary permissions to log on to the specified company.
Resolution: Correct the access permissions for the user ID used by the DD.
Error 12291 (0x3003) - Connection temporarily unavailable
Probable Cause: All sessions that the DD is allowed to use are unavailable.
Resolution: Wait until the DD has released one of the sessions it is using, or assign more sessions to the DD using the Data Director Settings Tool.
Error 12292 (0x3004) - Cannot find Plugin ID in registry
Probable Cause: The DD is trying to use a plugin that is not installed. This can also be caused by an incorrect plugin ID (a typo) in the connection string.
Resolution: Install the correct plugin or modify the connection string so that it uses the correct Plugin ID.
Error 12293 (0x3005) - Cannot load plugin dll
Probable Cause: The plugin has been registered and the connection string is probably correct, but the path to the plugin .dll file is invalid or the .dll file is missing.
Resolution: Uninstall and reinstall the plugin to correct the registry setting. Try to remove and then add the DD server again.
Error 12294 (0x3006) - Plugin version not supported
Probable Cause: This version of the plugin cannot be used with this version of the DD because the plugin belongs to a newer version of the DD.
Resolution: Upgrade the DD to the same version as the plugin, or install the version of the plugin that matches the version of the DD.
Error 12295 (0x3007) - Waiting for a previous package
Probable Cause: A previous package belonging to the same Job ID has not been processed successfully. It can still be in status Waiting or Error.
Resolution: Take the appropriate action so that the first packet that has not been processed is either processed successfully or cancelled.
Error 12296 (0x3008) - Error connecting to a database
Probable Cause: The DD cannot connect to the database.
Resolution: There are two possibilities. If the error occurs when the DD is trying to generate a packet, the connection string for the source database is probably incorrect. This can be fixed by specifying the correct connection string. An easy way to check whether the connection string is correct is to run the Test Connection With DD function from the Location receiving the error. The Test Connection should be run on the same computer that is running the DD to make sure that all hosts and services configurations are correct on that computer. If this happens at the receiving DD, the cause is probably that the connection string specified in the packet is not recognized by the receiving DD. The most common cause is that the server name in the incoming package is not valid on the receiving end. Example:
a. The sender wants the packet to be read into database A running on server Store1.
b. The server name Store1 is not recognized at the receiving end.
This can be resolved by correcting the DNS and services entries for the computer running the receiving DD.
Error 12297 (0x3009) - The Data Director license file does not include this plugin ID
Probable Cause: The plugin ID in the connection string may be incorrect, or the plugin may not be included in the license being used.
Resolution: Check the Plugin IDs in the DD license (use the Show Current button on the first page of the Data Director Settings Tool). Obtain a DD license that includes the plugin.
Error 12304 (0x3010) - Maximum number of subnets allowed exceeded
This can happen when a DD is forwarding a package to another DD.
Probable Cause: The number of subnets allowed according to the DD license file has been exceeded.
Resolution: Obtain a DD license with a higher number of subnets allowed.
Error 12305 (0x3011) - Error while sending package information
Probable Cause: Connection failure or an unstable network.
Resolution: Wait for a retry. Check connections.
Error 12306 (0x3012) - Error while transferring package
Probable Cause: Connection failure or an unstable network.
Resolution: Wait for a retry. Check connections.
Error 16384 (0x4000) - Error writing file
Probable Cause: The DD cannot write to disk. This can be caused by:
1. The disk being full.
2. The DD not having permission to write to the work directory.
3. The path to the work directory specified in the Data Director Settings Tool being incorrect.
Resolution: Free some space on the disk, make sure that the process has the necessary permissions to write to the disk and make sure that the path to the work directory is correct.
Error 16385 (0x4001) - Error creating file
Probable Cause: See Error 16384.
Resolution: See Error 16384.
Error 16386 (0x4002) - Error reading file
Probable Cause: The DD was trying to read a file that does not exist.
Resolution: If the file is missing, it needs to be recreated by running the job again.
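The codes above fall into ranges by area, which is easiest to see from their hexadecimal values (0x1000 for transfer errors, 0x2000 for processing errors, 0x3000 for connection and plugin errors, 0x4000 for file errors). The mapping below simply restates the codes listed in this section as a Python dictionary for quick lookup; it is not an official part of the product.

    # Data Director error codes listed in this chapter, keyed by decimal value.
    DD_ERRORS = {
        0:     "Error on sending request",
        4096:  "Error inserting in system tables",          # 0x1000
        4097:  "Remote connection dropped",
        4098:  "Cancelled because of send and receive size",
        4099:  "Cannot find new packet number",
        4100:  "Cannot instance a socket",
        4101:  "Timeout expired in transaction server",
        4102:  "HopCount has exceeded its maximum value",
        4103:  "Receiver has rejected the package",
        8192:  "Error in processing request",                # 0x2000
        8193:  "Cancelled because of previous error",
        12288: "Error connecting to DD",                     # 0x3000
        12289: "The connection string was empty",
        12290: "Could not log in",
        12291: "Connection temporarily unavailable",
        12292: "Cannot find Plugin ID in registry",
        12293: "Cannot load plugin dll",
        12294: "Plugin version not supported",
        12295: "Waiting for a previous package",
        12296: "Error connecting to a database",
        12297: "The Data Director license file does not include this plugin ID",
        12304: "Maximum number of subnets allowed exceeded",
        12305: "Error while sending package information",
        12306: "Error while transferring package",
        16384: "Error writing file",                         # 0x4000
        16385: "Error creating file",
        16386: "Error reading file",
    }

    print(hex(12296), DD_ERRORS[12296])   # 0x3008 Error connecting to a database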
Summary
This section explains Data Director administration. The following topics are included in this section:
- Data Director Monitor
- Data Director Administration: Administration Setup and Administration Card
- Troubleshooting
7 Optimizing Replication
7.1 Preload Actions
Data distribution can be done using Actions in LS Retail. But if a new store or POS is opened, a fresh database needs to be set up for that store or POS. If all data were to be sent to this database, the best replication method would be By Normal. But if only specific data, for example only selected items, prices and so on, should be replicated into this database, the replication method By Normal does not work without major workarounds, and creating Preactions by running a Modify on the requested tables could result in the data being sent not only to the new database but also to other databases with existing data. In this case, the Preload Actions functionality helps create Actions only for specific tables and specific distribution locations, without running a Modify. A specific combination of Scheduler Subjob and Scheduler Job creates the Actions that are used for the preload and replicates the data. To Set Up a Scheduler Subjob for Preload Actions:
1. Click LS Retail Scheduler, Scheduler Subjob.
2. Press F3 to create a new subjob.
3. Fill in the ID and Description fields.
4. In the From Table ID and the To Table ID fields, select the table(s) to replicate from/to. They should be the same.
5. In the Replication Method field, select the By Actions option.
6. The What To Do field must be empty; it is not used for replication By Actions.
7. On the Replication tab of the Scheduler Subjob, select Action Table ID 99001527, Preload Action.
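Conceptually, a preload subjob creates Actions aimed only at the new location, instead of waiting for a Modify to create Preactions for every location. The sketch below illustrates that idea in Python only; it is not the Preload Action table (99001527) logic, and the record structure and function name are invented for this illustration (table 27 is the standard Item table number).

    # Illustration: creating preload actions for a new store without touching
    # the source records.  Each action targets only the new location's group.
    def preload_actions(records_by_table, target_group, target_subgroup):
        actions = []
        for table_id, keys in records_by_table.items():
            for key in keys:
                actions.append({
                    "table_id": table_id,
                    "record_key": key,
                    "type": "Insert",
                    "location_group_filter": f"{target_group}->{target_subgroup}",
                })
        return actions

    # Preload two items (table 27) for a subgroup created for the new store.
    demo = preload_actions({27: ["1000", "1001"]}, "ACME", "SEATTLE")
    print(len(demo), "actions created")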
7.2 Tables to Replicate
While setting up replication from the head office to the stores, and further to the POS terminals, it needs to be decided which tables to replicate, based on the areas used, how posting is done and which tables are necessary for system functions. Default replication job data can be inserted by running Creating Default Data for LS Retail; the settings can be changed where needed. The following gives an overview of a typical replication setup where the head office posts statements.
The setup lists each table with its Table No. and Table Name, whether it is replicated HO to Store, Store to HO, Store to POS and POS to Store (Yes, No or If needed), the suggested replication method (Actions or Normal), the suggested frequency group (Often, Special or Setup) and the linked table, if any. The tables covered include the standard Dynamics NAV master and setup tables (Currency, Customer, Item, G/L Account, the posting setup tables, Sales Price, Sales Line Discount and so on), the LS Retail master, setup and POS tables (Retail Setup, Store, POS Terminal, Store Group, Barcodes, Periodic Discount, Infocode, the POS profile, menu and command tables, the Scheduler and Distribution tables and so on) and the transaction, statement, statistics and archived transaction tables.
7.3 Fields and Tables to Exclude from Replication
For some tables, it is very important to exclude certain fields from replication, because not all table fields should be controlled from the head office database. Certain fields, such as item store ordering information, may be maintained in the store databases only, and those fields must therefore be excluded from the replication from head office to the stores. Following is a list of tables and fields that should never be replicated from the head office to the stores (table number, table name, and the fields or notes that apply):
- 99001486 Retail Setup: field 6 Local Store No.; field 70 Log Scheduler Actions; field 134 Scheduler is Running.
- 99001586 POS Terminal: field 21 Last Date Checked; field 22 Last Time Checked; field 27 Next Check Date (can be replicated during setup, then added to the exclude list); field 28 Next Check Time (can be replicated the first time, then added to the exclude list); field 45 Error Occurred; field 86 Last Message Text; field 40 Run Status.
- Scheduler Repl. Counter: the table should not be replicated.
- Scheduler Log: the table should not be replicated.
- Scheduler Log Line: the table should not be replicated.
- InStore Header: field 30 Not Active; field 34 Closing Status; field 100 Replication Counter (should be added to the Exclude field list); a table filter should be applied as follows: Dist. Location To with filter type ToLocation.
- 10000777: field 100 Replication Counter (should be added to the Exclude field list); a table filter should be applied as follows: Dist. Location To with filter type ToLocation.
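The effect of an exclude field list can be sketched as a simple filter applied to each record before it is packaged. The Python fragment below is an illustration only, not the Data Director's actual transfer-field handling; the table and field names are taken from the list above as an example, and the function name is invented.

    # Illustration: dropping excluded fields from a record before replication.
    EXCLUDE = {
        "POS Terminal": {"Last Date Checked", "Last Time Checked", "Error Occurred"},
    }

    def fields_to_send(table_name, record):
        """Return a copy of the record without the fields excluded for this table."""
        excluded = EXCLUDE.get(table_name, set())
        return {f: v for f, v in record.items() if f not in excluded}

    pos = {"No.": "P0001", "Store No.": "S0001", "Last Date Checked": "2009-08-01"}
    print(fields_to_send("POS Terminal", pos))   # Last Date Checked is not replicated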
7.4 Specifying General Replication Information
To run replication successfully, the general replication information that is used by all distribution locations in the system must be filled in. To Specify General Replication Information:
1. Click LS Retail - Scheduler, Setup, Scheduler Setup. The Scheduler Setup window appears.
2. On the General tab, fill in the Days Actions Exist and Days Sched. Log Exists fields.
FIGURE 7.4-1: SCHEDULER SETUP
The Days Actions Exist field contains the number of days for which Actions should exist in the program. The codeunit Delete Logs (99001486) uses this field to restrict the number of actions that are deleted each time the codeunit is run. If 1 is entered in this field, the codeunit running today deletes all actions created before yesterday; that is, actions exist for yesterday and today. The Days Sched. Log Exists field contains the number of days the Scheduler Log should exist in the program. The codeunit Delete Logs (99001486) uses this field to restrict the number of scheduler log entries that are deleted each time the codeunit is run. If 1 is entered in this field, the codeunit running today deletes all scheduler log entries created before yesterday; that is, scheduler log entries exist for yesterday and today.
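The retention rule can be expressed in a few lines of Python, shown below purely to illustrate the date arithmetic described for the Delete Logs codeunit; it is not the codeunit itself, and the function name is invented.

    # Illustration of the "Days Actions Exist" rule used by the Delete Logs codeunit.
    from datetime import date, timedelta

    def entries_to_keep(entries, days_to_exist, today=None):
        """Keep only entries created on or after (today - days_to_exist)."""
        today = today or date.today()
        cutoff = today - timedelta(days=days_to_exist)
        return [e for e in entries if e["created"] >= cutoff]

    # With Days Actions Exist = 1, actions from yesterday and today survive.
    sample = [{"created": date(2009, 8, 10)}, {"created": date(2009, 8, 12)}]
    print(len(entries_to_keep(sample, 1, today=date(2009, 8, 12))))   # 1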
7.5 LS Configuration Use Cases
Use Case I - Central Configuration
In a central configuration, Data Director is installed in one central location accessible to the head office and all store databases. The DD Services are created on this machine and cater to the head office as well as to all the stores.
FIGURE 7.5-1: CENTRAL CONFIGURATION
In this configuration, all the distribution locations are defined in the database from which the job is initiated. This database must have distribution location cards defined for both the source and the destination databases. The distribution location for the source (for example, head office) should contain the DD Service that caters to the source database (DD HO), and the same applies to the stores.
Use Case II - Split Configuration
In a split configuration, Data Director is installed at the head office as well as at the stores. The DD services catering to the stores are created at the stores, and the services catering to the head office are created at the head office. In this configuration, data should be pushed from the source to the destination.
FIGURE 7.5-2: SPLIT CONFIGURATION
To push data from the source to the destination, distribution locations for both the source and the destination databases must be defined in the source database. The distribution location for a store should contain the DD service running at that store, and the distribution location for the head office should contain the DD Service running at the head office. Distribution locations should be defined in exactly the same way regardless of whether they are set up in the head office database or in a store database.
Use Case III - Fixed IP Available Only at Head Office
When a fixed IP address is available only at one central location, data needs to be pushed or pulled from the stores. The store service is used to read data from the store database and send it to the head office service, which writes the data into the head office database. The store services are also used to pull data from the head office; that is, the store service reads data from the head office and writes it into the store database. This configuration is slightly different from the split configuration: the store server knows the address of the head office server and the DD services running at the head office, but the head office services cannot communicate with the store services or the store server, because the head office server cannot recognize a store server that has no fixed IP address. In this configuration, the Distribution Location Cards should be defined in the store databases. To pull data from a store to the head office, distribution locations should be defined for both the store and the head office with the DD Service running at the store; this may be the same service for both or two different services. To send data from a store to the head office, one more distribution location card should be defined for the head office, containing the DD Service running at the head office.
Use Case IV - Very High Number of Stores
In this case, different services can be created to forward data packages to sets of stores. For example, if there are 100 stores, there can be 10 forwarder services used to forward the data packages: the 1st forwarder service forwards packages to 10 stores, the 2nd service to the next 10 stores, and so on. This speeds up the process, and separate services can be used to read and write the data at the head office. In this configuration, distribution location cards should be defined in the head office database (or in the scheduler database, if different from the head office database). The distribution location card for the head office should contain the DD Service running at the head office (if there are separate services for reading and writing data, the distribution location used for sending data from head office to store should contain the reading DD service). The distribution location cards for store 1 to store 10 should contain, as distribution server, the DD Service running at the store (which receives the package being forwarded by the DD Service at head office) and, as forwarder, the DD Service running at the head office in forwarder mode (a 2nd stage Data Director, for which the package owner is the reading DD service at the head office that created the package). The same method is used to send data from store to head office: a distribution location card for the head office should be defined in the store database with the DD Service running at the head office (if there are separate services for reading and writing data, this should be the writing DD service), and a distribution location card for the store should be defined in the store database with the DD Service running at the store.
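The idea of splitting a large number of stores across several forwarder services can be sketched as a simple partitioning, as below. This Python fragment is an illustration only; the service names (DD-FWD01 and so on) and the assign_forwarders function are invented for the illustration and do not correspond to any actual Data Director configuration.

    # Illustration: assigning 100 stores to 10 forwarder services, 10 stores each.
    def assign_forwarders(stores, forwarders):
        """Split the store list evenly across the available forwarder services."""
        per_service = -(-len(stores) // len(forwarders))      # ceiling division
        plan = {}
        for i, fwd in enumerate(forwarders):
            plan[fwd] = stores[i * per_service:(i + 1) * per_service]
        return plan

    stores = [f"S{n:03d}" for n in range(1, 101)]
    forwarders = [f"DD-FWD{n:02d}" for n in range(1, 11)]
    plan = assign_forwarders(stores, forwarders)
    print(len(plan["DD-FWD01"]))    # 10 stores handled by the first forwarder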
Summary
This section explains how to optimize replication, which tables to replicate, and which fields and tables to exclude from replication. The following topics are covered in this section:
- Preload Actions
- Tables to Replicate
- Fields and Tables to Exclude from Replication
- Specifying General Replication Information
- LS Configuration Use Cases
8 Exercise
8.1 Exercise 1 - Replication of Items to Stores
You are replicating all items to all stores. Start by setting up the Distribution Locations, the Scheduler Subjob, the Scheduler Job Type and the Scheduler Job in the source database.
1. Create a Distribution Location Card for the source location with the following information:
   Company Name: CRONUS LS5.05 W1 Demo Data
   User ID and Password: Super
   Distribution Server: DD-SERVER
   Server Name: HO
   Version: 5.00
   Driver Type: Native
   Path to Cfront Directory: C:\Program Files\LS Retail\DataDirector\plugins\finplugin\cfront\500\
2. Check the test connection from Functions, Test Connection with Data Dir.
3. Set up the Distribution Location Cards for the destination locations in the same way as in steps 1 and 2.
4. Create a Scheduler Subjob with the following information:
   ID: ITEM
   From-Table ID:
   To-Table ID:
   Replication Method:
   Field Transfer Type:
   Action Table ID:
5. Create a Scheduler Job Type with the following information:
   Type Code: DD
   Distribution Restrictions: Include List
   Object Type: Codeunit
   Object No.: 99001466 (Data Distribution)
6. Create a Scheduler Job Header with the following information:
   Job ID: ITEM
   Scheduler Job Type Code: DD
   From-Location Code: HO (Source Location)
   Job Type: Data Replication
   Distribution Restrictions: Include List
7. Receiver Groups: All (Scheduler Job, Job, Receiver Groups).
8. Receiver Location Include/Exclude: enter the destination distribution location codes.
9. Enter the Subjob ID: ITEM (Scheduler Job Line).
All required setup for the Item master replication is now complete. The Job ITEM can be run by clicking Scheduler Job, Run Now, or it can be replicated automatically by the LS Scheduler.