Practical Course: Database Implementation
Content
In this practical course you will implement a main-memory database system from scratch.
Prerequisites
- lecture Fundamentals of Databases (IN0008) or similar course
- systems programming experience in C/C++
- beneficial: lecture Database Systems on Modern CPU Architectures (IN2118)
Organization
- Our weekly meetings will be every Wednesday at 3pm in room 02.11.058. The first meeting is on 14 October.
- The course will be in English.
General Notes
Please note the following suggestions for your project: Notes on Programming Assignments. Here is a summary:
- Use a build system, preferably make. Make is fairly easy to get started with; you can find a simple tutorial e.g. here.
- Use a source code control system, e.g. Git. You can find an interactive tutorial e.g. here.
- Use a memory checker to find (weird) bugs in your program; we recommend Valgrind.
- If you prefer to use a (graphical) IDE, we recommend Eclipse with CDT or KDevelop.
- Use a debugger, e.g. gdb. Many IDEs have debugging features built in, e.g. Eclipse.
- A nice tool for checking the generated assembly instructions is this interactive compiler.
- If you use GCC (g++) as your compiler, here are some important compiler flags/options that may help you find bugs:
- -g generates debug symbols for better debugging
- -O0 disables optimizations to allow for more reliable debugging; use -O3 instead when you measure performance
- -Wall generates helpful warnings
- -std=c++11 enables C++11
- profiling with perf:
- profiling helpers
- record where time is spent: perf record -e cycles:pp -p PID (PID is the process id of the process you want to benchmark)
- show report of recorded events: perf report
- sample call graphs, not just locations: perf record -e cycles:pp -p `pidof BINARYNAME` --call-graph=dwarf (perf report will show cumulative results)
- flame graphs
- count number of cache misses: perf stat -e cache-misses -p PID
- list of available events: perf list
- monitor CPU-specific perf events (not listed in perf list)
- find event for your CPU in Intel Software Developer's Manual Volume 3 Chapter 19
- you need the "Event Num." (e.g. D2) and "Umask Value" (e.g. 01)
- the perf event name starts with r, then concatenate umask and event (umask first!): perf record -e r01D2
- perf tutorial
Task 1
Implement the TPC-C "New Order" transaction and storage for all relations as a standalone C++ program.
- Write a load procedure to populate the database with the data provided in the *.tbl files.
- Translate the "New Order" (neworder.script) transaction to C++ using the following mapping from SQL to C data types (a sketch follows after this list):
- integer: int32_t
- char,varchar: length plus character array (do not use pointers or std::string)
- timestamp: uint64_t (seconds since 01.01.1970)
- numeric(x,y): int64_t (x is the total number of digits, y is the number of digits after the decimal separator; for example, with numeric(10,2) the value 12.3 is represented as 1230)
- Test your implementation by translating newOrderRandom (from oltp.script) to C++ and executing it 1 million times; how many newOrder transactions per second can your implementation execute?
- Before and after executing the transactions, print the number of tuples in the order, neworder, and orderline tables.
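A minimal sketch of this type mapping in C++; the Varchar template and the warehouse layout below are illustrative assumptions, not required names:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Fixed-size string type: length plus character array, as required above.
template <unsigned MaxLen>
struct Varchar {
   uint16_t len = 0;
   char data[MaxLen];
   void set(const char* s) {
      len = static_cast<uint16_t>(std::strlen(s));
      std::memcpy(data, s, len);
   }
};

// numeric(x,y) stored as a scaled 64-bit integer, e.g. 12.3 -> 1230 for y=2.
using Numeric = int64_t;

// One possible layout for the warehouse relation (attributes abbreviated).
struct Warehouse {
   int32_t w_id;
   Varchar<10> w_name;
   Numeric w_tax;  // numeric(4,4)
   Numeric w_ytd;  // numeric(12,2)
};

std::vector<Warehouse> warehouse;  // row-wise storage of all warehouse tuples
```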
Task 2
- Write a simple parser for SQL schema files. It should handle the TPC-C schema file. You can assume that all attributes are declared NOT NULL, and you must support additional non-primary-key (non-unique) indexes. This code may be helpful, but it does not support all needed features.
- Write a generator that generates C++ code from the schema. It must be possible to store tuples, insert and delete them, access their attributes, and find tuples using the primary key (a possible shape of the generated code is sketched below).
- (Manually) re-implement the newOrder transaction using the generated code.
- Add the delivery transaction to your existing TPC-C implementation.
- Execute 1 million transactions composed of a mix of newOrder (90%) and delivery (10%) transactions (see oltp.cpp); measure your performance. Send your solution to Viktor Leis by 28 Oct 2015, 10am.
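Hypothetically, the generator could emit something like the following for each relation; the names, the std::map-based primary key, and the abbreviated attribute set are assumptions, not a required design:

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// Hypothetical generator output for the district relation (abbreviated).
struct District {
   int32_t d_id;
   int32_t d_w_id;
   int64_t d_ytd;  // numeric(12,2), scaled integer
};

struct DistrictTable {
   std::vector<District> tuples;
   // primary key (d_w_id, d_id) -> slot in the tuples vector
   std::map<std::tuple<int32_t, int32_t>, std::size_t> primaryKey;

   void insert(const District& d) {
      tuples.push_back(d);
      primaryKey[std::make_tuple(d.d_w_id, d.d_id)] = tuples.size() - 1;
   }
   District* find(int32_t w_id, int32_t d_id) {
      auto it = primaryKey.find(std::make_tuple(w_id, d_id));
      return it == primaryKey.end() ? nullptr : &tuples[it->second];
   }
   // delete: move the last tuple into the freed slot and fix up both
   // primary key entries (omitted for brevity)
};
```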
Task 3
- Implement the following SQL query as a C++ program, using STL data structures for the join implementation (a hash join sketch follows below). Do not use any existing index structures (primary key indexes):
select sum(ol_quantity*ol_amount-c_balance*o_ol_cnt) from customer, "order", orderline where o_w_id = c_w_id and o_d_id = c_d_id and o_c_id = c_id and o_w_id = ol_w_id and o_d_id = ol_d_id and o_id = ol_o_id and c_last like 'B%'
- Run the query 10 times in sequence. For each run, print the result of the query (should be 1367872001.25 initially) and measure the execution times.
- Execute 1 million transactions (the newOrder and delivery mix). Using the fork system call, run the query concurrently. Whenever a query is finished, create a new snapshot using fork, so that exactly one snapshot and query are active at any given time (see the fork sketch below). This example for using fork may be helpful.
- Measure how many transactions per second you can process using this snapshotting model, how long the queries take on average, and how long the fork takes.
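For the join implementation with STL containers, one canonical pattern is a hash join like this sketch. The relations and the single-column key are heavily simplified assumptions; for the real query, the composite key (w_id, d_id, c_id) can be packed into one int64_t:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Customer { int32_t c_id; int64_t c_balance; };
struct Order    { int32_t o_c_id; int32_t o_ol_cnt; };

// Hash join on o_c_id = c_id: build a hash table on one input, probe it
// with the other. The full query would join orderline in the same way.
int64_t joinSum(const std::vector<Customer>& customers,
                const std::vector<Order>& orders) {
   // build phase: hash the customer input
   std::unordered_multimap<int32_t, const Customer*> ht;
   ht.reserve(customers.size());
   for (const auto& c : customers) ht.emplace(c.c_id, &c);
   // probe phase: stream over the orders and combine matching tuples
   int64_t sum = 0;
   for (const auto& o : orders) {
      auto range = ht.equal_range(o.o_c_id);
      for (auto it = range.first; it != range.second; ++it)
         sum += it->second->c_balance * o.o_ol_cnt;
   }
   return sum;
}
```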
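A minimal sketch of the fork-based snapshotting loop, assuming a placeholder runQuery for the query above (error handling omitted):

```cpp
#include <sys/wait.h>
#include <unistd.h>

void runQuery();  // placeholder for the aggregation query above

void snapshotLoop() {
   pid_t child = fork();  // snapshot: the child sees a copy-on-write image
   if (child == 0) { runQuery(); _exit(0); }
   for (int i = 0; i < 1000000; i++) {
      // ... execute one newOrder (90%) or delivery (10%) transaction ...
      int status;
      if (waitpid(child, &status, WNOHANG) == child) {
         // previous query finished: fork the next snapshot immediately,
         // so exactly one snapshot and query are active at any time
         child = fork();
         if (child == 0) { runQuery(); _exit(0); }
      }
   }
   waitpid(child, nullptr, 0);  // wait for the last query to finish
}
```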
Task 4
- Implement query compilation of relational algebra trees to C++ code. Use the produce/consume (push) model; a sketch of the operator interface follows below. You need to support the table scan, selection, print, and inner join operators.
- Manually build an algebra tree for the following SQL query:
select c_first,c_last,o_all_local,ol_amount from customer, "order", orderline where o_w_id = c_w_id and o_d_id = c_d_id and o_c_id = c_id and o_w_id = ol_w_id and o_d_id = ol_d_id and o_id = ol_o_id and c_id = 322 and c_w_id = 1 and c_d_id = 1
- Compile the expression to C++ code and execute the resulting code (including a print operator at the root).
- For debugging the generated code, it may be useful to indent it with clang-format.
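A common shape for the produce/consume interface is the following sketch; the class names and the string-based code emission are illustrative assumptions, not a required design:

```cpp
#include <cstdio>
#include <memory>
#include <string>

// produce() asks an operator to emit C++ code that produces its tuples;
// consume() is called by the child once a tuple is in scope, so code is
// pushed from the table scans up towards the root of the algebra tree.
struct Operator {
   Operator* parent = nullptr;
   virtual ~Operator() = default;
   virtual void produce() = 0;
   virtual void consume(Operator* source) = 0;
};

struct TableScan : Operator {
   std::string table;
   explicit TableScan(std::string t) : table(std::move(t)) {}
   void produce() override {
      std::printf("for (auto& t : %s.tuples) {\n", table.c_str());
      parent->consume(this);  // let the parent emit its per-tuple code
      std::printf("}\n");
   }
   void consume(Operator*) override {}  // a scan is a leaf
};

struct Print : Operator {
   std::unique_ptr<Operator> input;
   explicit Print(std::unique_ptr<Operator> in) : input(std::move(in)) {
      input->parent = this;
   }
   void produce() override { input->produce(); }
   void consume(Operator*) override {
      std::printf("std::cout << /* projected attributes */ << '\\n';\n");
   }
};
```

For example, Print root(std::make_unique<TableScan>("customer")); root.produce(); would emit the scan loop with the print statement nested in its body; selection and join operators slot in between in the same way.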
Task 5
- Implement a simple parser for the following subset of SQL:
- The select clause contains one or more attribute names.
- The from clause contains one or more relation names.
- The where clause is optional and can contain one or more selections (connected by "and"). You only need to support selections of the form "attribute = attribute" and "attribute = constant".
- You can assume that each relation is referenced only once, and that all attribute names are unique.
- Implement semantic analysis, which takes the parsed SQL statement and the schema as input and computes an algebra tree that can be used as input to your code generator. Report errors when non-existing attributes or tables are referenced. Also, you should report an error if a table has no join condition (i.e., a cross product would be necessary). Build left-deep join trees based on the order of the relations in the from clause.
- Implement a "read-eval-print-loop" for your database system:
- read one line from standard input
- analyze it (report any errors and abort if necessary)
- generate C++ code and store it in a temporary file
- call a C++ compiler and generate a temporary shared library (.so file)
- using dlopen, dlsym, and dlclose, you can now load the shared library and execute the query in it (have a look at the example and the sketch below); you may need to compile with -fPIC and -rdynamic
- Measure and report the compilation and the execution times separately.
- hint: have a look at the rlwrap tool
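The compile-and-load step of the loop could look roughly like this sketch; the file names and the extern "C" entry point run are assumptions:

```cpp
#include <cstdlib>
#include <dlfcn.h>
#include <iostream>

// Compile the generated query into a shared library, load it, and run it.
// The host binary should be linked with -ldl and built with -rdynamic so
// the query code can call back into the database.
void compileAndRun() {
   int rc = std::system(
       "g++ -O3 -std=c++11 -fPIC -shared /tmp/query.cpp -o /tmp/query.so");
   if (rc != 0) { std::cerr << "compilation failed\n"; return; }
   void* handle = dlopen("/tmp/query.so", RTLD_NOW);
   if (!handle) { std::cerr << dlerror() << '\n'; return; }
   // the generated file is assumed to define: extern "C" void run();
   auto run = reinterpret_cast<void (*)()>(dlsym(handle, "run"));
   if (run) run();
   dlclose(handle);
}
```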
- Test the following queries:
select w_id from warehouse;
select c_id, c_first, c_middle, c_last from customer where c_last = 'BARBARBAR';
select d_name from district, order where d_w_id = o_w_id and o_d_id = d_id and o_id = 7;
select o_id, ol_dist_info from order, orderline where o_id = ol_o_id and ol_d_id = o_d_id and o_w_id = ol_w_id and ol_number = 1 and ol_o_id = 100;
select o_id, ol_dist_info from orderline, order where o_id = ol_o_id and ol_d_id = o_d_id and o_w_id = ol_w_id and ol_number = 1 and ol_o_id = 100;
select c_last, o_id, i_id, ol_dist_info from customer, order, orderline, item where c_id = o_c_id and c_d_id = o_d_id and c_w_id = o_w_id and o_id = ol_o_id and ol_d_id = o_d_id and o_w_id = ol_w_id and ol_number = 1 and ol_o_id = 100 and ol_i_id = i_id;
select c_last, o_id, ol_dist_info from customer, order, orderline where c_id = o_c_id and o_id = ol_o_id and ol_d_id = o_d_id and o_w_id = ol_w_id and ol_number = 1 and ol_o_id = 100;
Task 6
- Create a new query engine, similar to Task 5, that generates parallel code for multi-core CPUs using the Intel TBB library. Use the tbb::parallel_for construct to parallelize the table scans and tbb::concurrent_unordered_map for the join hash tables (a sketch follows below).
- Compare the performance of the old, single-threaded engine with the new engine restricted to a single thread (use tbb::task_scheduler_init init(1); to restrict the number of threads used by TBB).
- Compare the performance of your new engine using a single thread with its performance using multiple threads.
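A sketch of the parallel build phase of a join with TBB; the table layout and function names are illustrative assumptions:

```cpp
#include <cstdint>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/concurrent_unordered_map.h>
#include <tbb/parallel_for.h>

struct Customer { int32_t c_id; };
using JoinHT = tbb::concurrent_unordered_multimap<int32_t, const Customer*>;

// Build phase of a hash join, parallelized over chunks of the scanned table.
// The concurrent map lets all TBB worker threads insert without a global lock.
// Constructing tbb::task_scheduler_init init(1); beforehand restricts TBB to
// a single thread for the comparison measurements.
void buildHashTable(const std::vector<Customer>& customer, JoinHT& ht) {
   tbb::parallel_for(tbb::blocked_range<size_t>(0, customer.size()),
                     [&](const tbb::blocked_range<size_t>& r) {
      for (size_t i = r.begin(); i != r.end(); i++)
         ht.insert({customer[i].c_id, &customer[i]});
   });
}
```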
Related Papers
- Motivation for main-memory database systems: OLTP Through the Looking Glass, and What We Found There
- Snapshots using fork: HyPer - Hybrid OLTP&OLAP High Performance Database System
- Query compilation/code generation: Efficiently Compiling Efficient Query Plans for Modern Hardware
- Parallel query evaluation: Morsel-Driven Parallelism: A NUMA-Aware Query Evaluation Framework for the Many-Core Age
Project Topics
- Fast scans
- Bw-Tree/Masstree
- Hardware Transactional Memory/Synchronization
- Compression
- Column-Store Query Engines
- Cracking
- Transaction Processing (Hekaton)
- Transaction Processing (Silo)
- Hash joins