MEMORY tables have the following characteristics:
- Space for MEMORY tables is allocated in small blocks. Tables use 100% dynamic hashing for inserts. No overflow area or extra key space is needed. No extra space is needed for free lists. Deleted rows are put in a linked list and are reused when you insert new data into the table.
- MEMORY tables also have none of the problems commonly associated with deletes plus inserts in hashed tables.
- MEMORY tables use a fixed-length row-storage format. Variable-length types such as VARCHAR are stored using a fixed length.
- MEMORY tables cannot contain BLOB or TEXT columns.
- MEMORY includes support for AUTO_INCREMENT columns.
- MEMORY tables are shared among all clients, just like any other non-TEMPORARY table.
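As a small sketch of the characteristics above (the table and column names here are illustrative, not from the manual), a MEMORY table can combine an AUTO_INCREMENT key with a VARCHAR column; because of the fixed-length row format, the VARCHAR is stored at its full declared width:

```sql
-- Hypothetical example: a MEMORY table with an AUTO_INCREMENT column.
-- The VARCHAR(100) column occupies a fixed 100-character field per row,
-- since the MEMORY engine uses fixed-length row storage.
CREATE TABLE session_cache (
    id   INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(100),
    PRIMARY KEY (id)
) ENGINE = MEMORY;
```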
- MEMORY performance is constrained by contention resulting from single-thread execution and table lock overhead when processing updates. This limits scalability when load increases, particularly for statement mixes that include writes.
Despite the in-memory processing for MEMORY tables, they are not necessarily faster than InnoDB tables on a busy server, for general-purpose queries, or under a read/write workload. In particular, the table locking involved with performing updates can slow down concurrent use of MEMORY tables from multiple sessions.
Depending on the kinds of queries performed on a MEMORY table, you might create indexes as either the default hash data structure (for looking up single values based on a unique key), or a general-purpose B-tree data structure (for all kinds of queries involving equality, inequality, or range operators such as less than or greater than). The following sections illustrate the syntax for creating both kinds of indexes. A common performance issue is using the default hash indexes in workloads where B-tree indexes are more efficient.
CREATE TABLE t (i INT) ENGINE = MEMORY;
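Both index types can be requested explicitly with a USING clause. A sketch, with illustrative table names, of creating each kind:

```sql
-- HASH index (the default for MEMORY): efficient for single-value
-- equality lookups, e.g. WHERE id = 42.
CREATE TABLE lookup_hash (
    id INT,
    INDEX USING HASH (id)
) ENGINE = MEMORY;

-- BTREE index: also supports inequality and range operators,
-- e.g. WHERE id < 100 or WHERE id BETWEEN 10 AND 20.
CREATE TABLE lookup_btree (
    id INT,
    INDEX USING BTREE (id)
) ENGINE = MEMORY;
```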
The maximum size of MEMORY tables is limited by the max_heap_table_size system variable, which has a default value of 16MB.
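To allow larger MEMORY tables, the limit can be raised; as a sketch, setting it to 64MB for the current session (the new value applies to tables created or altered afterwards, not to existing tables):

```sql
-- Raise the per-table size limit for MEMORY tables to 64MB
-- for this session only; use SET GLOBAL to affect new sessions.
SET SESSION max_heap_table_size = 64 * 1024 * 1024;
```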
Why not to use the MEMORY storage engine –
No transaction or foreign key support, meaning you will have to manage transactional integrity and referential integrity in your own code where needed (which could end up being much less efficient than letting the database do this for you, though that depends heavily on your app's expected behaviour patterns).
Table-level locking only: this could be a significant barrier to scalability if your app needs multiple concurrent writers to the same set of tables, or in cases where your read operations use locks to ensure consistent data is read. In such cases, a disk-based table that supports much finer lock granularity will perform far better if enough of its content is currently cached in RAM.