Current Information

News about Brandeis Consulting, information with a current time reference

The 2nd edition of SQLScript for SAP HANA is now available.

Officially, the 2nd edition of SQLScript for SAP HANA, the English edition of my book, has been available since mid-August. Today I received my author copies from SAP PRESS, and for the first time I am holding a bound copy in my hands. The format has become slightly larger, and the colouring now corresponds to the German edition.

Comparison between 1st and 2nd edition

In terms of content, a lot has changed. Some topics are new, some small things have been corrected or optimized, and one topic has even been removed: the Web-Based Development Workbench, since the Neo cloud environment will probably no longer play a role.

New topics in the 2nd edition of SQLScript for SAP HANA

Some topics were added in SPS03 and SPS04 of HANA 2.0, and these have also found their way into the new edition. And one important point I had simply forgotten in the first edition. 😉

What else has changed from the first edition

And, of course, there are countless other minor additions and corrections. For example, many screenshots of the BW interfaces in the SAP GUI were replaced by the new BW/4HANA interfaces in Eclipse.

Although the content has increased, the new edition has fewer pages. This is due to the new, larger format.

Is it worth buying if you already have the first edition?

The basics about SQLScript in the book haven’t changed. Thus, the update to the 2nd edition is not necessary for beginners in the subject. But for professionals interested in the latest features, it may be worth buying. And, of course, it should be the current edition for the company library, so that employees are always up to date.

The 2nd edition of SQLScript for SAP HANA will be published in August

The 2nd edition of the English edition of SQLScript for SAP HANA is scheduled for release on August 26, 2021. The edition has been updated and expanded with a few new topics. It thus largely corresponds to the current German edition.

New topics

SQLScript for SAP HANA 2nd Edition

Other topics include:

  • Lateral Join – The term lateral join is somewhat misleading, as it is not another type of join, such as inner join or left outer join. Rather, a lateral join is a join where the join partner is a correlated subquery or function. You can think of a lateral join as executing the subquery separately for each row of the previous result. It is possible to access the data of the current row by the correlation name.
  • The SQLSCRIPT_STRING library provides additional functions for processing character strings:
    • With SPLIT you can split a string at separators into several smaller sections
    • With FORMAT you can format a string with a syntax from the Python language
    • And with TABLE_SUMMARY you can make a string out of a table
  • With ABAP BASIS 7.53, scalar AMDP functions are possible. This is very important, especially for BW/4HANA routine development, since this was not possible until now.
  • Data series can be generated with the SERIES_GENERATE functions. This is especially handy for date values.
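
To give an impression, here is a small sketch of two of these features on a HANA 2.0 SPS04 system. The tables and column names are only illustrative, and the exact lateral-join syntax may vary by revision:

 -- Data series: one row per day of January 2021
 SELECT generated_period_start AS day
   FROM SERIES_GENERATE_TIMESTAMP('INTERVAL 1 DAY',
                                  '2021-01-01', '2021-02-01');

 -- Lateral join: for each user, the most recently created task;
 -- the subquery is evaluated per row of the outer table
 SELECT u.name, t.titel
   FROM benutzer AS u,
        LATERAL (SELECT TOP 1 titel
                   FROM aufgaben
                  WHERE verantwortlicher = u.id
                  ORDER BY angelegt_am DESC) AS t;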

And, of course, there are countless other minor additions and corrections. For example, many screenshots of the BW interfaces in the SAP GUI were replaced by the new BW/4HANA interfaces in Eclipse.

Neo environment will be switched off

The Neo environment of the SAP Cloud Platform will be switched off on November 14th! This also means that a popular opportunity to create a HANA instance with SQL console is disappearing…

A blog post will follow next week on how to achieve this in the Cloud Foundry environment.
The e-learning course SQLScript Basics on https://www.brandeis-training.de will also be updated accordingly.

SQLScript Unit Tests, the End-User Test Framework

With version 2.0 SPS04 of the SAP HANA database, the so-called “End-User Test Framework in SQLScript” was introduced. This makes it possible to create SQLScript unit tests in the form of User Defined Libraries (UDL). These can then be executed automatically by calling the procedure SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA.

What is being tested?

Most of the time, test cases are used to check whether a subprogram generates the expected output for a given input. In the case of databases, it is therefore primarily procedures and functions that are tested. However, you can also test the state of the database itself by querying, for example, the number of records in a table. In the following example, we can verify that the scripts for the demo data model ran successfully and generated the expected number of records.


To explain the concept of the test framework, let’s start with a simple example. In Listing 1.2 you will find a test library that contains only the bare essentials. It consists of only one test procedure, T_AUFGABEN_CNT. It checks whether there are exactly 1000 entries in the table AUFGABEN.

 CREATE LIBRARY test_demo_dm LANGUAGE SQLSCRIPT TEST AS BEGIN

   @Test()
   PUBLIC PROCEDURE t_aufgaben_cnt()
   AS BEGIN
     USING sqlscript_test AS test;
     test:expect_eq((SELECT COUNT(*) FROM aufgaben),
                    1000);
   END;

 END;

To run this SQLScript unit test in our test library, we call the following procedure in the SQL console. You may need to adjust the name of the schema:

 CALL SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA(
     '{"schema":"MY_SCHEMA",
       "library":"TEST_DEMO_DM"}', ?, ?, ?);

The result of this procedure call is three tables. In our case, only the first one actually contains an entry, because the test was successful. As soon as a test fails, a message is issued in the second table, and the third table contains details about the call stack of the errors. Below is an example of the output tables.

The language of the tests

The first thing that is striking about the small example is that the library is created with the language SQLSCRIPT TEST. This distinguishes test libraries from productive libraries. In fact, the implementation of the test is also SQLScript.

Pragmas for procedures

Pragmas are used to classify the individual procedures of the test libraries. The most important role is played by the parameterless test procedures, which are marked by a leading pragma @Test().

In order to establish a defined system state before the test, a corresponding procedure can be marked with the pragma @TestSetUpLibrary().

After the tests are performed, the procedure with the pragma @TestTearDownLibrary() may be called to clean up after the test procedures.
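
Put together, a test library skeleton with these pragmas could look like this. This is only a structural sketch; the demo table AUFGABEN and the statements in the set-up and tear-down procedures are made-up examples:

 CREATE LIBRARY test_skeleton LANGUAGE SQLSCRIPT TEST AS BEGIN

   @TestSetUpLibrary()
   PUBLIC PROCEDURE set_up()
   AS BEGIN
     -- establish a defined system state, e.g. insert test data
     INSERT INTO aufgaben (id) VALUES (-1);
   END;

   @Test()
   PUBLIC PROCEDURE t_example()
   AS BEGIN
     USING sqlscript_test AS test;
     test:expect_eq((SELECT COUNT(*) FROM aufgaben WHERE id = -1), 1);
   END;

   @TestTearDownLibrary()
   PUBLIC PROCEDURE tear_down()
   AS BEGIN
     -- clean up after all test procedures have run
     DELETE FROM aufgaben WHERE id = -1;
   END;

 END;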

Test procedures

Within the test procedures, the check procedures of the system library SQLSCRIPT_TEST can be used to verify the state of the program. The different check procedures can be found in Table 1.1. Any scalar expression can be used as value 1 or value 2.

Check procedure Description
EXPECT_EQ(<value1>, <value2>) Checks whether both values are equal
EXPECT_NE(<value1>, <value2>) Checks whether the two values are different
EXPECT_GE(<value1>, <value2>) Checks whether value 1 is greater than or equal to value 2
EXPECT_GT(<value1>, <value2>) Checks whether value 1 is greater than value 2
EXPECT_LE(<value1>, <value2>) Checks whether value 1 is less than or equal to value 2
EXPECT_LT(<value1>, <value2>) Checks whether value 1 is less than value 2
EXPECT_NULL(<value>) Checks whether the value is NULL
EXPECT_TABLE_EQ(<table1>, <table2>[, <ignore order>]) Checks whether the two tables are identical. The row order does not matter, as long as the corresponding flag is not set to FALSE.
FAIL(<message>) The test is set to failed and the message is issued
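
For example, a test procedure inside such a library could combine several of these checks. This is a sketch; the column STATUS on the demo table AUFGABEN is a made-up example:

 @Test()
 PUBLIC PROCEDURE t_plausibility()
 AS BEGIN
   USING sqlscript_test AS test;
   -- at least one record must exist
   test:expect_gt((SELECT COUNT(*) FROM aufgaben), 0);
   -- no record may be missing its status
   test:expect_eq((SELECT COUNT(*) FROM aufgaben
                    WHERE status IS NULL), 0);
 END;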

Modularization in the test library

These check procedures of our SQLScript unit tests do not need to be called directly in the test procedure. The code can also be further modularized within the test library. For example, if multiple tables should be checked for their number of records, it is a good idea to move this part of the code to a separate procedure to avoid unnecessary repetition.

In Listing 1.3, the logic to check the number of records within a table was outsourced to a private procedure. This makes it easy to check a lot of tables without having to repeat the same code every time.

 PRIVATE PROCEDURE check_table_count(
     IN iv_tabname NVARCHAR(30),
     IN iv_expected_count INTEGER)
 AS BEGIN
   USING sqlscript_test AS test;
   DECLARE lv_query NVARCHAR(5000);
   DECLARE lv_count INTEGER;
   lv_query = 'SELECT COUNT(*) FROM ' || :iv_tabname;
   EXEC lv_query INTO lv_count;
   test:expect_eq(:lv_count, :iv_expected_count);
 END;

 @Test()
 PUBLIC PROCEDURE t_table_count()
 AS BEGIN
   check_table_count('TASKS', 1000);
   check_table_count('USER', 30);
   check_table_count('STATUS', 6);
   check_table_count('STATUS_TEXT', 12);
   --check_table_count('STATUS_TEXT', 13);
 END;


If we run the above test library and have done everything correctly, we will not receive an error message. To demonstrate an error, I changed the parameters for the table STATUS_TEXT. The following three illustrations show the three return tables of the test. The first table shows that the test procedure in the test library was executed with the result FAILED:

Result of the test execution

In the second table we find the reason for the error. In our case, the number of records is not correct.

Error messages of test execution

The third table now returns the corresponding call stack.

Call stack of reported bugs

Further concepts

In addition to these basic concepts, there is also the possibility to use so-called configurations to create different preconditions for the test procedures.

The test procedures, in turn, can be divided into groups by a classification.

When calling the procedure SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA, you can pass a JSON document that specifies in detail which procedures with which classifications should be executed in which configurations.

These details can be found in the reference documentation SAP HANA SQLScript Reference for SAP HANA Platform.


With the Test Framework, we now have a very useful tool in our hands that allows us to automatically test our programs at the database level. Unit testing is now a widely used standard, which is also applicable to SQLScript developments. Test-driven development (TDD) is also possible with the framework.

User Defined Libraries (UDL) in SQLScript

SAP HANA 2.0 SPS03 introduced the concept of User Defined Libraries (UDL) in SQLScript. It enables us as developers to bundle several functions, procedures and variables in one object. All components of a UDL are defined in one piece of source code and only one metadata object is created.

In some respects, a library is similar to a static class in ABAP, that is, a class that contains only class methods and variables. The visibility of individual components of a library can be controlled using the keywords PUBLIC and PRIVATE. The PRIVATE property specifies that this component can only be accessed within the same library. Using PUBLIC, on the other hand, you can allow free access from outside.

The lifetime of the data of a variable of a UDL is limited to one session. Public variables can also be accessed from outside the library.

Direct access to components

Within the same library, all components can be used directly. The variables of the library are also directly visible in all procedures and functions of the library. Outside the library, the component must be preceded by the name of the UDL.


Access with alias

Up to SAP HANA 2.0 SPS03, access from outside the library is only possible within a logical container. To do this, an alias must be created there for each library, which can then be used to access the components. Only one alias name is possible for each library. With SPS04, direct access was allowed, which also allows the use of components outside logical containers.

The example in the following listing shows the different ways of accessing the components of the UDL:

  • within the same library
  • outside the library with the library name
  • in a different logical container with an alias name

 CREATE LIBRARY testlib LANGUAGE SQLSCRIPT AS BEGIN
   PUBLIC CONSTANT cv_42 INT = 42;

   PUBLIC PROCEDURE proc1(OUT ov_result INTEGER)
   AS BEGIN
     -- within the same library: direct access
     ov_result = :cv_42;
   END;

   PUBLIC PROCEDURE call_proc1(OUT ov_result INTEGER)
   AS BEGIN
     CALL proc1(ov_result => ov_result);
   END;
 END;

 -- outside the library, with the library name (as of SPS04):
 CALL testlib:call_proc1(?);
 SELECT testlib:cv_42 FROM DUMMY;

 -- in a logical container, with an alias:
 DO BEGIN
   USING testlib AS lib; -- definition of the alias
   SELECT lib:cv_42 FROM DUMMY;
 END;

Listing 1: Definition of a UDL with a constant

In the next listing you will see how you can use such constants in a query elsewhere. A prerequisite for this is of course the declaration with the keyword PUBLIC.

 SELECT *
   FROM aufgaben
  WHERE id < const:cv_max
    AND id > const:cv_min;

Listing: Usage of the constants of the previous listing

Unfortunately, there are restrictions when using variables and constants outside the library, so it may be necessary to copy a variable from a library to a local variable first. This is the case, for example, if you want to use a value in the TOP clause of a query.

The other way around, direct assignment of a UDL variable with the SELECT … INTO statement is not possible. Instead, a local variable must first be filled, which is then assigned to the variable from the UDL.
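
A sketch of both workarounds, reusing testlib and its constant cv_42 from the listing above. The variable gv_count is a hypothetical public variable of the library:

 DO BEGIN
   DECLARE lv_top   INTEGER;
   DECLARE lv_count INTEGER;

   -- copy the library constant into a local variable first,
   -- because it cannot be used directly in the TOP clause
   lv_top = testlib:cv_42;
   SELECT TOP :lv_top * FROM aufgaben;

   -- SELECT ... INTO a library variable is not possible either;
   -- fill a local variable and assign it afterwards
   SELECT COUNT(*) INTO lv_count FROM aufgaben;
   testlib:gv_count = :lv_count;
 END;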

Bottom Line

Overall, the concept of user-defined libraries does not yet seem 100 percent mature. Nevertheless, it already offers many useful features. It remains to be hoped that the next service pack will further round out the concept. An integration of UDLs into the AMDP framework would also be desirable.

Anonymous Functions in SQLScript

With SAP HANA 2.0 SPS04, another feature has been added to the SQLScript language: the embedded functions. These enable the use of imperative SQLScript code within a SELECT query. Such a function is created and executed only for exactly this one query. This is reminiscent of an anonymous block, which is why they are also called anonymous functions.

In the following example, the query is called with an embedded function within a procedure.

  • The embedded function starts at line 11 and ends at line 37. It contains imperative coding, making the whole procedure imperative.
  • The parameter IV_MAX of the procedure is passed to the parameter IV_A of the function in line 11. This parameter is then used in line 24 as the upper limit for the FOR loop. Direct access to IV_MAX within the function is not possible.
  • The WHERE condition in line 38 shows once again that this is a query.
  • You can test the procedure with the CALL statement in line 41.

I do not recommend the use of embedded functions, as even simple functions can considerably impair the readability of the code. You could clearly see this in the example shown.

As an alternative, a separate UDF function can be created or the query can be split into two steps:

  • Creation of a table variable with the imperative code
  • Querying this table variable

This decomposition makes it easier to read the code and it is possible to view the intermediate results in the debugger. However, if these alternatives are not possible for technical reasons, the embedded functions allow us to use imperative code directly in a query.
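
A minimal sketch of this two-step decomposition in an anonymous block; the logic itself is only illustrative:

 DO BEGIN
   DECLARE lt_result TABLE (id INTEGER, quadrat INTEGER);
   DECLARE i INTEGER;

   -- Step 1: fill a table variable with imperative code
   FOR i IN 1..10 DO
     :lt_result.INSERT((:i, :i * :i));
   END FOR;

   -- Step 2: query the table variable declaratively;
   -- lt_result can also be inspected in the debugger
   SELECT * FROM :lt_result WHERE quadrat > 50;
 END;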

SAP Documentation for embedded functions in SQL queries

Recursive procedure calls in SQLScript

Up to SAP HANA 2.0 SPS03, recursive calls in SQLScript were not allowed. This means that a procedure could not call itself, either directly or indirectly via other procedures. This restriction was removed with SPS04. However, the call depth is limited to 32, and procedures in libraries still do not allow recursive calls. The listing shows an example of the recursive calculation of the Fibonacci numbers. Please be careful: if you call this procedure with parameters that are too large (>20), the SAP HANA system can quickly reach its limits.

 CREATE PROCEDURE fibonacci(IN iv_value INT,
                            OUT rv_result INT)
 AS BEGIN
   DECLARE lv_tmp INT;

   IF iv_value IN (1, 2)
   THEN rv_result = 1;
   ELSE
     CALL fibonacci(:iv_value - 1, rv_result);
     CALL fibonacci(:iv_value - 2, lv_tmp);
     rv_result = :rv_result + :lv_tmp;
   END IF;
 END;

 CALL fibonacci(20, ?);

Listing: Recursive calculation of the Fibonacci Numbers

Here is the link to SAP documentation on the topic. As far as I know, recursive queries are not supported.

Optimal Package Size for DTPs in BW/4HANA

With BW/4HANA 2.0, I observed that the package size of the DTPs is dynamically suggested by the system. Figure 1 shows, for example, a suggested size of 1,311,000 data records. The system thus ventures into an order of magnitude that, based on my own practical experience, is better than the former fixed standard value of 100,000 rows.

Figure1: Suggested Package Size for a DTP

In this blog post I have collected some useful information regarding the DTP package size and added my own example with a series of measurements.

Actual package size for HANA execution

When DTPs are executed in HANA, we only see exactly one number of records per package in the monitor. For ABAP execution, a number of records was shown here for each substep: extraction, transformation, insertion, and so on. In HANA execution, however, these substeps can no longer be distinguished. All logic from extraction, transformation rules and routines, possibly over several levels, is included in a single CalculationScenario, from which the data is read. This scenario is optimized as a whole. This means that we no longer get individual values for the respective steps. It also explains why the number of data records in the monitor is only displayed after a whole package has been processed, and not step by step as in the past.

How much data is actually processed at once depends on several parameters. The package size in the DTP specifies the number of data records in the source. Normally, N packages of this size are formed; the last package is correspondingly smaller.

If the data is retrieved request by request in delta mode, this procedure is repeated for each individual request. This means that for each request there is a package that is not the full package size.

If semantic partitioning is set, as many partitions as necessary are inserted into each package until the package size is exceeded. If the partitioning criteria are chosen inappropriately, this can result in enormously large packages or uneven package sizes.

However, the actual package size is also influenced by the transformation routines. If, for example, a filter with a WHERE condition is applied to the INTAB, the number of data records in the package is reduced accordingly. This condition is pushed down into the extraction of the data. However, since the packages are built before the individual packages are processed, the packages are effectively reduced by the filter criterion.
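
Sketched as a SAP HANA transformation routine (AMDP), such a filter might look as follows. The method name and signature are generated by BW; the field BUKRS and the filter value are made-up examples:

 METHOD procedure BY DATABASE PROCEDURE FOR HDB
                  LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.
   -- the WHERE condition is pushed down into the extraction,
   -- so each package effectively contains fewer records
   outTab = SELECT * FROM :inTab WHERE bukrs = '1000';
 ENDMETHOD.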

Conversely, it can also happen that the actual package size is increased by the logic in the routines. This is the case, for example, if data records are multiplied by a join or if data is transposed from a key-figure model to a corresponding characteristic model.

The actual number of data records is important for good performance.

Optimal Package Size for HANA Execution

An unfavorable package size increases the runtime. With HANA execution, the packages can be considerably larger than with ABAP execution. An order of magnitude of 1,000,000 data records is in most cases a good starting value if you do not receive a proposal from the system. It is important that this refers to the actual number of data records: if your routines filter out 90% of the data records, you should take this into account in the package size of the DTP. If the packages become too large, the available parallel processes may no longer be fully utilized. You can see this in the following example.

The optimum package size can only be determined by tests. These tests should be performed on the production system with real data.

Example of a DTP with Processing Logic

In the following, I show the runtimes for a DTP that contains a transformation that creates 16 records from one source record by transposing it. This is necessary to convert the PCA plan data from a key-figure-based model into a characteristic-based model in which each data record represents a period. In the example, there are 1,994,590 data records in the source, from which the routine generates 31,913,440 data records in the target.

Package Size in the DTP | Actual Number of Data Records | Runtime in Seconds

Table 1: Runtime depending on the Package Size

The default value of the BW/4HANA system for the DTP package size is 1,000,000 data records. However, this is no longer optimal due to the duplication of the data records. If we instead choose a package size such that the actual number of data records is approximately 1,000,000, the runtime is considerably better. In our example, the optimum is 1.6 million actual records.

Influence of the Number of Work Processes in HANA Execution

In addition to the package size, the number of work processes also has an influence on the total runtime of a DTP. However, the effect of doubling the number of work processes is nowhere near as great as one might naively expect:

For the above example, I have increased the number of work processes from 3 to 6. This shortens the runtime by only about 12%, from 138 seconds to 121 seconds.

From ABAP execution, we know a roughly linear relationship between runtime and the number of work processes. When executing process chains, you must always make sure that the total number of background processes (type BTC) is sufficient.


The package size is an important parameter for optimizing the DTP runtime. With HANA execution, we can now choose larger packages. The default value of BW/4HANA 2.0 sometimes has to be adjusted, especially if the transformation logic changes the actual number of data records. Often the only thing that helps here is trying it out.

BW/4HANA Cockpit – Where is the Administration in the SAP GUI?

BW/4HANA 2.0 no longer supports the administration of InfoProviders and requests in the SAP GUI. Instead, a browser window with the SAP BW/4HANA Cockpit opens when we click on the administration of an ADSO, or when we start a DTP and the DTP monitor opens. You can find the SAP documentation for the BW/4HANA Cockpit here: https://help.sap.com/viewer/107a6e8a38b74ede94c833ca3b7b6f51/2.0.0/de-DE/2447401c5d01428a9c4bb8edbd567cd8.html

The BW/4HANA Cockpit is a Fiori web application that certainly has its advantages. But sometimes this transaction is not accessible, e.g. if there are problems with the configuration of the web server or certificates, or if corresponding roles are missing (even though you have the profile SAP_ALL).

A simple workaround is the SAP GUI transaction RSMNG. Here you can (still) reach the old interface for the administration of a data target. From there, you can navigate to the individual DTP requests.

Calling the ADSO administration from Eclipse
Calling ADSO Administration from the Context Menu

If the BW/4HANA cockpit cannot be opened

If the BW/4HANA Cockpit does not open when you click on “Manage the DataStore Object (advanced)” in Eclipse, this may be due to your settings in Eclipse. For me, it did not work with the setting “Default system web browser”. With the explicit selection of Chrome, a window opened.

Browser settings in Eclipse
Setting for the web browser in Eclipse

You should of course get used to the new interface of the BW/4HANA Cockpit in the browser, because SAP may switch off or restrict the “old” GUI transactions. But as a workaround, this is definitely a good interim solution.