ERP-CRM cloud-based integration… is Infor moving towards providing a fully integrated business environment?

Infor Releases First in String of Tie-in Applications for Salesforce.com

Infor launched Inforce Everywhere, which is built on top of Salesforce.com's development platform. It pulls data such as invoices and shipments from Infor ERP systems into Salesforce.com's CRM application, synchronizing the information with customer contacts, sales quotes and other associated data points.

For the full story please visit …

News courtesy:

Infor gets recognition for its excellent support

Infor Receives 2012 MarketTools Achievement in Customer Excellence Awards for Support and Consulting Services. The accolades demonstrate Infor's outstanding commitment to customer satisfaction for its consulting services at the onset of a software purchase, and to customer support that helps keep Infor solutions running at peak performance. This is the fourth year in a row that Infor has won a MarketTools ACE Award.

For the full story please visit the following link…


Create Vs Convert to Runtime Data Dictionary – when and why?

So, being into BaaN, we all must have faced the choice between the create run-time and convert to run-time options for tables/domains at least once. I have often seen people badly stuck in a dilemma when it comes to choosing between the two.

Today our discussion will try to answer the following questions:

1. The most important, most basic, but often ignored question: why is it required at all?

2. When should you choose convert over create run-time, and vice versa?

3. How do you troubleshoot issues/errors faced during CRDD (Create/Convert to Run-time Data Dictionary)?

Obviously I'll be explaining it in my own way, and I'll be eagerly expecting suggestions/feedback from your side.

So here we go …….

1. Why is it required at all?

Create/convert run-time both turn the application-level data definitions into run-time data… okay, that is a bit of a bookish explanation.

In simple terms: when you create a table via the maintain table session, the metadata only gets created at the application level. It is of no use until it gets converted into run-time data, which the application refers to at run time. So the run-time conversion actually brings the application data to file level in the operating system, under the path ${BSE}/dict or ${BSE}/../dict [the latter is the traditional path used in BaaN IV, but there is no restriction on where the path should be].

Run-time data for both tables and domains gets created/converted in the same path.

Once the run-time data is ready, the table is ready to be created in the database. The database definition of the table must always be in sync with the RTDD (run-time data dictionary) of the table; otherwise there is a problem accessing the table and error 512 will come up.
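Purely as an illustration of that sync requirement (BaaN performs this check internally; the field names and the printed message below are made-up stand-ins), the condition behind error 512 boils down to a field-by-field comparison:

```python
# Illustrative sketch only: the run-time data dictionary (RTDD) and the
# database definition must list the same fields, in the same order.
# Field names and the printed message are hypothetical stand-ins.

def definitions_in_sync(rtdd_fields, db_columns):
    """Return True when the RTDD and DB definitions agree."""
    return list(rtdd_fields) == list(db_columns)

rtdd = ["t$item", "t$dsca", "t$cpri"]   # fields according to the RTDD
db   = ["t$item", "t$dsca"]             # columns actually in the database

if not definitions_in_sync(rtdd, db):
    # BaaN surfaces such a mismatch as error 512 when accessing the table
    print("definition mismatch -> table not accessible (error 512)")
```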

2. When to choose convert over create run-time and vice-versa?

Well the answer is pretty simple.

When you create a table/domain for the very first time (i.e. it is present neither in the application nor in the database), go for the create run-time option.

When the table/domain already exists and there is a change in that particular table/domain definition, go for the convert to run-time option.

So what does convert to run-time really do? Is there any extra layer of complexity involved?

Yes, there is a point to worry about… Consider the following cases:

a. Suppose your table definition contains 5 fields, and there are 1000 rows in the table.

You delete 2 fields. Now after CRDD the database definition of the table must get synced with the RTDD; that implies that in the database, too, there will be only 3 fields.

So what about those 2 fields and their existing values in the table?

b. Your table definition again contains 5 fields, and the table has 1000 rows.

This time you add 2 more fields to the table. What value do you expect those 2 fields to hold for each of the 1000 rows present, after a proper CRDD?

These 2 cases have been taken into account in the underlying functionality of convert to run-time data dictionary.

Here is how it works,

1. Just before starting the conversion, BaaN takes a dump/backup of the existing data.

2. It drops the table and drops the table definition.

3. It creates the table definition as per the new definition mentioned in the application.

4. It creates the table on the database side (so the database definition also gets created as per the new definition from the RTDD).

5. Then it checks for column/field mismatches between the newly created table in the DB and the backup taken in step 1.

6. In case a. explained above, it simply discards those two fields and pumps the data back into the table for the remaining 3 fields.

    In case b., it checks the domain/data type of the 2 newly added fields and sets their value to the default: if the 2 new fields are integers the default value will be 0 (numeric zero); if they are strings the default value will be " " (blank/space, but not NULL). Again, the default value depends on the domain type and the default value set for that domain.
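The dump-and-reload behaviour described above can be sketched in Python. This is purely illustrative (BaaN's internal implementation is not public, and the field names and domain-default mapping here are invented):

```python
# Illustrative sketch of the convert-to-run-time reconciliation:
# dropped fields are discarded, newly added fields get their domain default.

DOMAIN_DEFAULTS = {"integer": 0, "string": " "}  # default value per domain type

def reconcile(backup_rows, new_fields, new_domains):
    """Rebuild rows for the new table definition from the backup dump.

    backup_rows : list of dicts keyed by old field names (the step-1 dump)
    new_fields  : field names in the new table definition
    new_domains : field name -> domain type, used to pick default values
    """
    result = []
    for row in backup_rows:
        new_row = {}
        for field in new_fields:
            if field in row:                      # field survived: keep value
                new_row[field] = row[field]
            else:                                 # field added: domain default
                new_row[field] = DOMAIN_DEFAULTS[new_domains[field]]
        result.append(new_row)                    # dropped fields simply vanish
    return result

old = [{"f1": 1, "f2": 2, "f3": 3, "f4": 4, "f5": 5}]

# Case a: 5 fields -> 3 fields (f4 and f5 deleted)
print(reconcile(old, ["f1", "f2", "f3"], {}))

# Case b: add f6 (integer domain) and f7 (string domain)
print(reconcile(old, ["f1", "f2", "f3", "f4", "f5", "f6", "f7"],
                {"f6": "integer", "f7": "string"}))
```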

3. How to troubleshoot issues/errors faced during CRDD (Create/Convert to Run-time Data Dictionary)?

The error you will mostly face during CRDD is: "serious error in bdbreconfig"

This means that not all domains/tables could be converted into run-time data.

There might be many reasons. Some of the most common are:

1. You are converting your table without converting the domains linked to the table fields.

2. A ".new" file was generated during reconfig and could not be removed.

3. A permission problem in the directory path where your RTDD gets stored.

The solution will depend on the situation, and discussing every possible case would get a bit lengthy here. Instead, I'll give you a standard process checklist that will get you past the problem most of the time:

1. Before starting the CRDD process, take a seq dump of the table and keep it safe with you.

2. Make sure the domain is always converted/created before you convert the table. After converting/creating a domain independently, log out and log in again to refresh the bshell cache.

3. Check the access permissions of the directory path, i.e. ${BSE}/dict/{package_combination}.

4. If a ".new" file is generated during CRDD, then do the following:

a. Remove the .new file, remove the .old file, and rename the existing table-definition file. E.g., if a .new file is generated during CRDD of tdsls401, you will see 3 files: tdsls401, tdsls401.old and tdsls401.new. Remove the .old and .new files, and rename tdsls401 as, say, tdsls401.fullonbaan.

b. Delete the entries for this particular table from the following tables via ttaad4100 (GTM):

ttadv500 – Conversion indicators
ttadv501 – Reconfiguration indicators
ttadv502 – Conversion parameters table def. / domains
ttadv503 – Reconfiguring Restart Data
ttadv504 – Reconfiguring Restart Data II
ttadv505 – Logged changes

c. Log out and log in. Then run CREATE run-time, not convert…

In most of the cases this will help you out.

Obviously there are some cases where you have to go to the DB, drop the table and recreate it. That is when the seq dump you took comes into the picture. You cannot upload it directly then, though, because the table definition has changed. So you have to manually delete/add columns in the seq dump in order to align the data with the current table definition, and then upload it.


So, you see, it's all pretty simple… try it yourself and let me know what new things you explore!!

Waiting to hear from you guys soon!!


Directory Structure of Baan

BaaN Software Environment – $BSE

Baan Software Environment is basically a UNIX environment variable that points to the BaaN environment files. All the files pertaining to the BaaN software are stored under the BSE.

Directory Structure


Bin

This folder basically consists of binary or compiled program files. The files in this folder are platform dependent and cannot be ported to another machine. It contains all the .exe files pertaining to the BaaN software, e.g. bshcmd.exe, bdbpre.exe, bdbpost.exe etc.


Application

This folder consists of the following files:
· Program objects
· Program scripts
· Report scripts
· Include scripts

So in the Application folder we have a subfolder {Package}{VRC}

ex: tdB40Cc4live


Forms:

So for forms we will have one more subfolder under tdB40Cc4live: f{package}{module}{language}, ex: ftdpur2

So under this particular package and module folder will be the actual definitions of the forms.

Ex: f{mod}{session no}{form no}{lang}

So the basic path of this form will be
$BSE/application/tdB40Cc4live/ftdpur2/fpur0100m00012
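The naming scheme above can be captured in a small helper. This is a sketch of the convention as described in this section; the concrete values (BSE path, session and form numbers) are invented for illustration:

```python
import os

# Sketch of the form-object path convention described above:
#   package-VRC folder : {package}{VRC}          e.g. tdB40Cc4live
#   form folder        : f{package}{module}{lang} e.g. ftdpur2
#   form file          : f{mod}{session}{form}{lang}
# Example values below are invented.

def form_path(bse, pkg_vrc, package, module, language, session, form_no):
    folder = "f%s%s%s" % (package, module, language)        # e.g. ftdpur2
    fname = "f%s%s%s%s" % (module, session, form_no, language)
    return os.path.join(bse, "application", pkg_vrc, folder, fname)

print(form_path("/baan/bse", "tdB40Cc4live", "td", "pur", "2", "0100m000", "1"))
```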


Menus:

So for menus we will have one more subfolder under tdB40Cc4live: m{package}{module}{language}

ex: mtdpur2


Reports:

So for reports we will have one more subfolder under tdB40Cc4live: r{package}{module}{language}

ex: rtdpur2

Program Objects:

So for program objects we will have one more subfolder under tdB40Cc4live: o{package}{module}, ex: otdpur

Program Scripts:

So for program scripts we will have one more subfolder under tdB40Cc4live: s{package}{module}, ex: stdpur

Include Scripts:

So for include scripts we will have one more subfolder under tdB40Cc4live: i{package}{module}, ex: itdpur

Include 6.1

This folder basically contains the include files consisting of common functions.

Lib – (Library or Information files)

This folder consists of library or information files:
· Printinf – printer drivers or printer information files
· Terminf – terminal drivers or information files
· Locale – multibyte information files
· Informix – information pertaining to the Informix database
· Oracle – information pertaining to the Oracle database


Dict

This consists of data dictionaries and domains.
The syntax is
dd{Package Combination}
Ex: ddB40CC4liv

Under this folder we have
d{Package}{module} – data dictionary of a table
d{package}.pd – domains


This has two folders: one for the data dictionary and the other for the application.


This is used to store intermediate or temporary files.

Data Warehouse


A very large database that stores historical and up-to-date information from a variety of sources and is optimized for fast query answering.

It is involved in three continuous processes:

1) At regular intervals, it extracts data from its information sources, loads it into auxiliary tables, and subsequently cleans and transforms the loaded data in order to make it suitable for the data warehouse schema;

2) It processes queries from users and from data analysis applications; and

3) It archives the data that is no longer needed by means of tertiary storage technology.
Most enterprises today employ computer-based information systems for financial accounting, purchasing, sales and inventory management, and production planning and control. In order to efficiently use the vast amount of information that these operational systems have been collecting over the years for planning and decision-making purposes, the various kinds of information from all relevant sources have to be merged and consolidated in a data warehouse.

While an operational database is mainly accessed by OLTP applications that update its content, a data warehouse is mainly accessed by ad hoc user queries and by special data analysis programs, also called Online Analytical Processing (OLAP) applications. For instance, in a banking environment, there may be an OLTP application for controlling the bank's automated teller machines (ATMs). This application performs frequent updates to tables storing current account information in a detailed format. On the other hand, there may be an OLAP application for analyzing the behavior of bank customers. A typical query that could be answered by such a system would be to calculate the average amount that customers of a certain age withdraw from their account by using ATMs in a certain region. In order to attain quick response times for such complex queries, the bank would maintain a data warehouse into which all the relevant information (including historical account data) from other databases is loaded and suitably aggregated.

Typically, queries in data warehouses refer to business events, such as sales transactions or online shop visits, that are recorded in event history tables (also called 'fact tables') with designated columns for storing the time point and the location at which the event occurred. Usually, an event record has certain numerical parameters such as an amount, a quantity, or a duration, and certain additional parameters such as references to the agents and objects involved in the event.

While the numerical parameters are the basis for forming statistical queries, the time, the location and certain reference parameters are used as the dimensions of the requested statistics. There are special data management techniques, also called multidimensional databases, for representing and processing this type of multidimensional data.
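The ATM example above can be sketched against a tiny fact table. This is illustrative only; the column names and data are invented:

```python
# Sketch: answering "average ATM withdrawal by region, for customers in a
# given age range" from a fact table. Each row is one withdrawal event with
# a numeric measure (amount) and dimension columns (region, customer_age).

from collections import defaultdict

fact_table = [
    {"amount": 100.0, "region": "north", "customer_age": 34},
    {"amount": 300.0, "region": "north", "customer_age": 36},
    {"amount": 50.0,  "region": "south", "customer_age": 35},
]

def avg_withdrawal_by_region(rows, min_age, max_age):
    totals = defaultdict(lambda: [0.0, 0])          # region -> [sum, count]
    for r in rows:
        if min_age <= r["customer_age"] <= max_age:  # slice on a dimension
            t = totals[r["region"]]
            t[0] += r["amount"]
            t[1] += 1
    return {region: s / n for region, (s, n) in totals.items()}

print(avg_withdrawal_by_region(fact_table, 30, 40))
# -> {'north': 200.0, 'south': 50.0}
```

A real data warehouse pre-aggregates and indexes such fact tables so that queries like this stay fast at scale, which is the point of the multidimensional techniques mentioned above.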

Locking Concept in Baan


Database inconsistencies can arise when two or more processes attempt to update or delete the same record or table. Read inconsistencies can arise when changes made during a transaction are visible to other processes before the transaction has been completed (for example, the transaction might subsequently be abandoned).

To avoid such inconsistencies, BaanERP supports the following locking mechanisms: record/page locking, table locking, and application locking.

To ensure that only one process at a time can modify a record, the database driver locks the record when the first process attempts to modify it. Other processes cannot then update or delete the record until the lock has been released. However, they can still read the record.

While one process is updating a table, it is important that other processes retain read consistency on the table. Read consistency means that a process does not see uncommitted changes. Updates become visible to other processes only when the transaction has been successfully committed. Some database systems do not support read consistency, and so a dirty read is possible. A dirty read occurs when one process updates a record and another process views the record before the modifications have been committed. If the modifications are rolled back, the information read by the second process becomes invalid.

Some databases, such as SYBASE and Microsoft SQL Server 6.5, use page locking instead of record locking. That is, they lock an entire page in a table instead of an individual record. A page is a predefined block size (that is, number of bytes). The number of records locked partly depends on the record size.

Delayed locks

Locking a record for longer than required can result in unnecessarily long waiting times. The use of delayed locks solves this problem to a great extent. A delayed lock is applied to a record immediately before changes are committed to the database and not earlier. When the record is initially read, it is temporarily stored. Immediately before updating the database, the system reads the value of the record again, this time placing a lock on it. If the record is already locked, the system goes back to the retry point and retries the transaction. If the record is not locked, the system compares the content of the record from the first read with the content from the second read. If changes have been made to the record by another process since the first read, the error ROWCHANGED is returned and the transaction is undone. If no changes have occurred, the update is committed to the database. You place a delayed lock by adding the keyword FOR UPDATE to the SELECT statement.

For example:
table ttccom001

select tccom001.* from tccom001 for update
selectdo
  tccom001.dsca = "…."
  db.update(ttccom001, db.retry)
endselect

A retry point is a position in a program script to which the program returns if an error occurs within a transaction. The transaction is then retried. There are a number of situations where retry points are useful:

· During the time that a delayed lock is applied to a record/page, an error can occur that causes the system to execute an abort.transaction(). In such cases, all that BaanERP can do is inform the program that the transaction has been aborted. However, if retry points are used, the system can automatically retry the transaction without the user being aware of this.

· Some database systems generate an abort.transaction() when a dirty record is read (that is, a record that has been changed but not yet committed). An abort.transaction() may also be generated when two or more processes simultaneously attempt to change, delete, or add the same record. In all these situations, BaanERP Tools can conceal the problem from the user by using retry points. It simply retries the transaction. If there is no retry point, the transaction is aborted and the session is terminated.

· In BaanERP, updates are buffered, so the success or failure of an update is not known until commit.transaction() is called. If an update fails, the commit of the transaction also fails, and the entire transaction must be repeated. If retry points are used, the system automatically retries the transaction.

· Retry points can also resolve potential deadlock problems. If, for example, the system is unable to lock a record, it rolls the transaction back and tries again.

It is vital that retry points are included in all update programs. The retry point for a transaction must be placed at the start of the transaction. The following example illustrates how you program retry points:

db.retry.point() | set retry point
if db.retry.hit() then
…… | code to execute when the system
| goes back to retry point
…… | initialization of retry point

The function db.retry.hit() returns 0 when the retry point is generated, that is, the first time the code is executed. It returns a value unequal to 0 when the system returns to the retry point through the database layer. When the system goes back to a retry point, it clears the internal stack of functions, local variables, and so on that were called during the transaction. The program continues from where the retry point was generated. The value of global variables is NOT reset.

When a commit fails, the database automatically returns to its state at the start of the transaction, and the program is set back to the last retry point. It is vital, therefore, that the retry point is situated at the start of the transaction. The db.retry.hit() call must follow the db.retry.point() call. Do not place it in the SQL loop itself, as this makes the code very untransparent. When a retry point is placed within a transaction, the system produces a message and terminates the session.
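For readers more comfortable outside the Baan 4GL: the delayed-lock-plus-retry-point mechanism is essentially optimistic concurrency control. A rough Python analogy (this is not BaanERP code; the in-memory store, the RowChanged exception name and the retry limit are all invented for the sketch):

```python
# Analogy only: delayed lock + retry point as optimistic concurrency control.
# RowChanged mirrors the ROWCHANGED error returned when the second (locking)
# read differs from the first read; the loop plays the role of the retry point.

class RowChanged(Exception):
    pass

def update_with_delayed_lock(store, key, mutate, max_retries=5):
    for _ in range(max_retries):          # retry point: the loop restarts here
        first_read = dict(store[key])     # initial read, no lock taken yet
        try:
            # second read "with lock": compare against the first read
            if store[key] != first_read:
                raise RowChanged()        # another process changed the row
            store[key] = mutate(dict(store[key]))   # commit the update
            return True
        except RowChanged:
            continue                      # go back to the retry point
    return False                          # gave up after max_retries attempts

db = {"tccom001/100": {"dsca": "old description"}}
ok = update_with_delayed_lock(db, "tccom001/100",
                              lambda row: {**row, "dsca": "new description"})
print(ok, db["tccom001/100"]["dsca"])
```

In a single-threaded sketch the compare never fails, but under concurrency the re-read-and-compare step is exactly what lets the transaction detect an interfering writer and restart from the retry point instead of committing stale data.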

Table locks

BaanERP provides a table locking mechanism, which enables you to lock all the records in a specified table. A table lock prevents other processes from modifying or locking records in the table, but not from reading them. This is useful when a particular transaction would otherwise require a large number of record locks. You use the db.lock.table() function to apply a table lock.

Application locks

An application lock prevents other applications and users from reading and/or modifying an application's data during critical operations. It is not part of a transaction and so is not automatically removed when a transaction is committed. Instead, an application lock is removed when the application ends or when appl.delete() is called.

Dynamic SQL in Baan

What is Dynamic SQL?

Dynamic SQL is a programming technique that accepts and executes SQL statements “On The Fly” at runtime. It adds flexibility and functionality to your applications. Dynamic SQL statements are not embedded in your source program. Instead they are stored in character strings input to or built by the program at runtime.

Why Dynamic SQL?

Most database applications do a specific job. For example, a simple program might prompt the user for an employee number, then UPDATE rows in the EMP and DEPT tables. In this case, you know the makeup of the UPDATE statement at pre-compile time. That is, you know which tables might be changed, the constraints defined for each table and column, which columns might be updated, and the data type of each column.

However, some applications must accept (or build) and process a variety of SQL statements at runtime. For example, a general-purpose report writer must build different SELECT statements for the various reports it generates. In this case, the statement's makeup is unknown until run time. Such statements can, and probably will, change from execution to execution. Dynamic SQL is used in this situation. Another important criterion is execution time. In some cases dynamic SQL can fetch data from different tables using the same SQL statement.

Advantages and Disadvantages of Dynamic SQL

Programs that accept and process dynamically defined SQL statements are more versatile than those using static embedded SQL statements. For example, your program might simply prompt users for a search condition to be used in the WHERE clause of a SELECT, UPDATE, or DELETE statement. A more complex program might allow users to choose from menus listing SQL operations, table and view names, column names, and so on.
The fact that the SQL statements can be dynamically changed can be utilized to eliminate redundant code. This is applicable in a situation where records from a table can be selected based on different selection criteria derived from some input parameters. This would typically get translated into two select statements separated by an if statement or case statement. If the processing required for each of the records is very complex, duplicating it may result in lower maintainability of the code. This can be avoided using dynamic SQL. Thus, dynamic SQL lets you write highly flexible applications.

However, some dynamic queries require complex coding, the use of special data structures, and more runtime processing. You might find the coding difficult unless you fully understand dynamic SQL concepts and methods.
In practice, static SQL will meet most of your programming needs. Use dynamic SQL only if you need its open-ended flexibility. Dynamic SQL can be used in some cases where one or more of the following is unknown at pre-compile time:
· text of the SQL statement (commands, clauses, and so on)
· the number of pseudo variables
· the data types of pseudo variables
· references to database objects such as fields, indexes, tables
But it depends on what the programmer emphasizes, efficiency or flexibility, and also on the scenario.

How Dynamic SQL Statements Are Processed

1. Typically, application programs prompt the user for the values of the pseudo variables used in the statement.
2. Then the statement is parsed; that is, examined to make sure that it follows syntax rules and refers to valid database objects.
3. Next, pseudo variables are bound to the SQL statement. Binding means passing the addresses of the pseudo variables in the SQL statement to BaaN so that BaaN can read or write their values.
4. Then the SQL statement is executed. The SQL statement can be executed repeatedly using new values for the pseudo variables.
5. Then each record is fetched one by one from the table. These fetched records can be processed further.
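The parse/bind/execute/fetch life cycle above is not unique to BaaN. As a hedged analogy (not BaaN code), here it is with Python's sqlite3 module, where the statement text is built at run time and the user-supplied value is bound to a placeholder; the table and column names are invented for the example:

```python
import sqlite3

# Analogy for the dynamic SQL life cycle: build the statement text at run
# time, bind a value to a placeholder, execute, then fetch row by row.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, name TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

column = "name"                                        # chosen at run time
stmt = "SELECT %s FROM emp WHERE empno = ?" % column   # steps 1-2: build/parse
cur = conn.execute(stmt, (2,))                         # steps 3-4: bind + exec
for row in cur:                                        # step 5: fetch rows
    print(row[0])                                      # -> bob
conn.close()
```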

SQL Programming

To use dynamic SQL in BaaN, the following functions are available:
1) SQL.PARSE – used to form a query.
2) SQL.SELECT.BIND – used to bind a pseudo variable (in the select clause) with a program variable.
3) SQL.WHERE.BIND – used to bind a pseudo variable (in the where clause) with a program variable.
4) SQL.EXEC – to initialize and execute the query.
5) SQL.FETCH – to fetch the result rows of the query one by one.
6) SQL.BREAK – to stop the execution of the query.
7) SQL.CLOSE – to delete all internal information of the query.