Arrays: The Complete List

Across projects and platforms—from quick scripts to systems programming—the way you organize sequential data shapes readability, performance, and maintenance. Small choices about containers and their behavior show up as big differences in latency, memory use, and developer ergonomics.

The list below covers 24 array types, from ArrayList and std::vector to sparse and GPU-oriented formats. Each entry is organized by Category, Memory/time (typical), and Where used so you can compare trade-offs at a glance.

How do I choose between ArrayList and std::vector?

Pick based on language and guarantees: std::vector is the go-to in C++ for contiguous storage, predictable iteration speed, and low-overhead resizing; ArrayList is a common Java choice with similar semantics but different allocation and concurrency characteristics. Consider element type, required performance for random access vs. insertions, and whether you need language-specific features (like move semantics in C++).

What should I read first in the table: Category, Memory/time (typical), or Where used?

Start with Category to narrow the purpose (dynamic array, ring buffer, sparse array), then check Memory/time (typical) for expected space and operation costs, and finally use Where used to match real-world patterns. This order helps you filter feasible options before benchmarking for your workload.

Arrays

| Name | Category | Memory/time (typical) | Where used |
|---|---|---|---|
| Static array | static | contiguous memory; O(1) indexed access; fixed-size allocation | C, embedded systems, low-level code |
| Dynamic array | dynamic | contiguous; amortized O(1) append; O(1) random access; occasional O(n) resize | std::vector, ArrayList, Python list, many languages |
| C array | language-specific | contiguous memory; O(1) access; stack or static allocation common | C, microcontrollers, system libraries |
| std::array | language-specific | contiguous; O(1) access; fixed size at compile time | C++ STL, templates, performance code |
| std::vector | language-specific | contiguous; O(1) random access; amortized O(1) append; occasional O(n) reallocation | C++ STL, game engines, system code |
| Java array | language-specific | contiguous for primitives; contiguous refs for objects; O(1) access; runtime bounds checks | Java JVM, Android apps, libraries |
| ArrayList | language-specific | contiguous; amortized O(1) append; O(1) access; occasional O(n) resize | Java standard library, Android code |
| Python list | language-specific | contiguous array of pointers; O(1) access; amortized O(1) append | Python core, scripts, many libraries |
| array.array | typed/GPU | contiguous typed memory; O(1) access; low memory per element | Python stdlib, binary I/O, small numeric tasks |
| NumPy ndarray | multidimensional | contiguous or strided; O(1) element access; vectorized operations for bulk work | NumPy, scientific computing, ML |
| NumPy memmap | memory-mapped | memory-mapped file-backed; page-faulted access; O(1) indexing per element | NumPy, large dataset processing, out-of-core tasks |
| Memory-mapped array | memory-mapped | file-backed memory mapping; OS paging; possibly non-contiguous view; O(1) element access | databases, file I/O, OS-level services |
| Jagged array | multidimensional | arrays of arrays; non-contiguous rows; O(1) row access; variable row lengths | C#, Java, languages with nested arrays |
| Row-major array | multidimensional | contiguous row-major layout; good row-wise locality; O(1) indexing | C, C++, NumPy (C-order), many libraries |
| Column-major array | multidimensional | contiguous column-major layout; good column-wise locality; O(1) indexing | Fortran, MATLAB, NumPy (Fortran-order), numerical libs |
| CSR | sparse | compact nonzero storage with row pointers; efficient row operations; O(nnz) storage | SciPy, Eigen, ML libs, sparse linear algebra |
| CSC | sparse | compact nonzero storage with column pointers; efficient column ops; O(nnz) storage | SciPy, sparse solvers, graph algorithms |
| COO | sparse | coordinate triples; simple insertion; less efficient arithmetic | SciPy, sparse conversions, incremental assembly |
| Bitset | bitset | packed bits in words; extremely compact; O(1) bit ops via masks | std::bitset, bitmap indexes, compression, bloom filters |
| Dynamic bitset | bitset | resizable packed bits; amortized O(1) resize; O(1) bit ops | boost::dynamic_bitset, Python bitarray, search indexes |
| Circular buffer | circular | fixed-size ring in contiguous buffer; O(1) enqueue/dequeue; constant-time head/tail ops | embedded systems, producers/consumers, audio I/O |
| CUDA device array | typed/GPU | contiguous device memory; high bandwidth; kernel access latency and host-device transfer cost | CUDA, cuBLAS, deep learning frameworks |
| TypedArray | typed/GPU | contiguous typed binary buffers; O(1) access; ArrayBuffer views | Web APIs, WebGL, Node.js, binary I/O |
| Structure of Arrays (SoA) | typed/GPU | separate contiguous arrays per field; improves SIMD/vectorization and cache use | game engines, data-parallel code, GPU compute |

Images and Descriptions

Static array

Fixed-size contiguous block of elements allocated at compile- or runtime. Offers constant-time indexed access and minimal overhead but cannot grow. Common in low-level systems, embedded code, and performance-critical loops where predictable memory layout and cache behavior matter.

Dynamic array

Resizable contiguous array that grows by reallocating a larger buffer (usually geometric growth). Provides O(1) random access and amortized O(1) append, with occasional O(n) copies during resize. Ideal for generic collections needing fast indexing and dynamic size.
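The growth-and-copy behavior described above can be sketched in a few lines of Python. This is a hypothetical minimal implementation for illustration, not how any particular runtime implements it; real libraries tune the growth factor and allocation strategy.

```python
class DynamicArray:
    """Minimal dynamic-array sketch: geometric growth, amortized O(1) append."""

    def __init__(self):
        self._size = 0
        self._capacity = 1
        self._buf = [None] * self._capacity

    def append(self, value):
        if self._size == self._capacity:
            # Occasional O(n) step: allocate a larger buffer (factor 2) and copy
            self._capacity *= 2
            new_buf = [None] * self._capacity
            new_buf[:self._size] = self._buf[:self._size]
            self._buf = new_buf
        self._buf[self._size] = value
        self._size += 1

    def __getitem__(self, i):
        if not 0 <= i < self._size:
            raise IndexError(i)
        return self._buf[i]  # O(1) random access into contiguous storage

    def __len__(self):
        return self._size

arr = DynamicArray()
for n in range(10):
    arr.append(n * n)
print(len(arr), arr[3], arr._capacity)  # 10 9 16
```

Because the capacity doubles each time it runs out, ten appends trigger only four copies, which is why the per-append cost is amortized constant.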

C array

C’s built-in fixed-size arrays are simple contiguous memory regions with direct pointer arithmetic and O(1) indexing. They can be allocated on stack or static data; lack bounds checking and automatic resizing, making safety and lifetime management the programmer’s responsibility.

std::array

std::array is a fixed-size wrapper around a C-style array providing STL container semantics. It stores elements contiguously with no heap allocation, guaranteeing O(1) indexing and better compile-time safety compared with raw arrays.

std::vector

std::vector is the standard resizable contiguous sequence type in C++. It provides dynamic growth with amortized O(1) push back, constant-time indexing, and strong cache locality. Offers iterator semantics, exception safety options, and manual capacity control via reserve.

Java array

Java arrays are fixed-size and store either primitives contiguously or references contiguously with separate heap objects. They offer O(1) indexing, runtime bounds checks, and predictable memory layout for primitives, commonly used for performance-sensitive loops and native interop.

ArrayList

ArrayList is Java’s resizable array-backed list. It encapsulates a contiguous Object[] buffer that grows by reallocation, delivering O(1) indexing and amortized O(1) appends. Boxing primitives adds overhead, and concurrent modification requires external synchronization.

Python list

A Python list is a dynamic array of pointers to PyObject. It provides O(1) indexing and amortized O(1) append, with growth strategies tuned for typical workloads. Because elements are references, memory overhead and indirection hurt performance compared with typed arrays.
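Both properties are easy to observe in CPython. The first half shows that a list holds references (mutating a shared element is visible through every reference); the second shows the over-allocation: the container's size jumps in steps rather than per append.

```python
import sys

# A list stores pointers, not the objects themselves: mutating a shared
# element is visible through every reference to it.
inner = [0]
outer = [inner, inner]
inner[0] = 42
print(outer)  # [[42], [42]]

# Appends over-allocate: sys.getsizeof reports the same container size for
# several appends in a row, then jumps when the buffer is reallocated.
sizes = []
lst = []
for _ in range(20):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))
print(sorted(set(sizes)))  # far fewer distinct sizes than appends
```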

array.array

array.array is a compact typed array of C-style numeric values in Python. It stores values contiguously with minimal overhead, enabling efficient binary I/O and lower memory footprint than lists while retaining Python-level APIs for small numeric sequences.
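The memory saving is measurable with the standard library alone. The sketch below compares a list of 1000 small ints (container plus one boxed object per element) against the equivalent `array.array('i')` (4-byte C ints, platform-dependent), and round-trips the typed array through raw bytes as one would for binary I/O.

```python
import array
import sys

nums = list(range(1000))
typed = array.array('i', nums)  # 'i': C signed int, typically 4 bytes each

# A list pays for the container of pointers plus one PyObject per element
list_bytes = sys.getsizeof(nums) + sum(sys.getsizeof(n) for n in nums)
typed_bytes = sys.getsizeof(typed)
print(list_bytes, typed_bytes)  # typed storage is several times smaller

# Round-trip through raw bytes, as used for binary I/O
raw = typed.tobytes()
back = array.array('i')
back.frombytes(raw)
assert back == typed
```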

NumPy ndarray

NumPy’s ndarray is a typed, N-dimensional array supporting contiguous or strided memory layouts and vectorized operations. It offers efficient bulk computations, slicing views without copies, and explicit memory order control (row- or column-major) for scientific and machine-learning workloads.
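Two of those properties, vectorized bulk operations and no-copy slicing views, look like this in practice:

```python
import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)  # row-major by default

# Vectorized bulk operation: one expression instead of a Python loop
doubled = a * 2.0

# Slicing returns a view, not a copy: writes through the view are
# visible in the original array
view = a[1, :]
view[0] = 99.0
print(a[1, 0])  # 99.0
```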

NumPy memmap

NumPy memmap maps large array data onto disk files, allowing array-style access without loading the entire dataset into RAM. Pages are loaded on demand, enabling processing of datasets larger than memory with near-memory-speed access patterns, subject to OS paging behavior.
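A minimal round-trip with `np.memmap` (the file path here is a throwaway temp file for illustration):

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "data.bin")

# Create a file-backed array; writes go to disk-backed pages, not a
# fully resident in-RAM buffer
mm = np.memmap(path, dtype=np.float32, mode="w+", shape=(1000,))
mm[:] = np.arange(1000, dtype=np.float32)
mm.flush()
del mm  # drop the mapping

# Reopen read-only: elements are paged in on demand as they are touched
ro = np.memmap(path, dtype=np.float32, mode="r", shape=(1000,))
print(float(ro[500]))  # 500.0
```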

Memory-mapped array

Memory-mapped arrays map file regions into virtual memory, giving programs direct byte-wise access as if in-RAM. They reduce copy overhead and simplify I/O, but performance depends on OS page faults and alignment; useful for large datasets, IPC, and persistence.

Jagged array

Jagged arrays are arrays whose elements are arrays, allowing rows of varying lengths and independent memory allocation. They save space for irregular data, enable per-row operations, and differ from rectangular multidimensional arrays in memory layout and cache behavior.
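In Python, a list of lists is already a jagged array; rows have independent lengths and independent storage:

```python
# Jagged array: rows are independent lists with their own lengths
triangle = [
    [1],
    [1, 1],
    [1, 2, 1],
    [1, 3, 3, 1],
]

row_lengths = [len(row) for row in triangle]
print(row_lengths)  # [1, 2, 3, 4]

# Rows can be grown or replaced independently of one another,
# which a rectangular multidimensional array does not allow
triangle[1].append(99)
print(len(triangle[1]))  # 3
```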

Row-major array

Row-major arrays store multi-dimensional data with rows contiguous in memory, optimizing row-wise iteration and locality. Common in C and C-order libraries; choice of layout affects cache performance and interop with libraries expecting different memory orders.

Column-major array

Column-major layout stores columns contiguously, improving column-wise access patterns and matrix operations optimized for that order. Used in Fortran, MATLAB, and some numerical libraries. Aligning algorithm access patterns with layout improves cache efficiency and BLAS interoperability.
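The difference between the two layouts is visible in NumPy's strides (bytes to step per axis). For a 1000×1000 float64 array, row-major steps 8 bytes along a row, while column-major steps 8 bytes along a column:

```python
import numpy as np

c = np.zeros((1000, 1000), order="C")  # row-major: rows contiguous
f = np.zeros((1000, 1000), order="F")  # column-major: columns contiguous

# Strides are (bytes per row-step, bytes per column-step)
print(c.strides)  # (8000, 8): neighbors within a row are adjacent
print(f.strides)  # (8, 8000): neighbors within a column are adjacent
```

Iterating along the small-stride axis keeps accesses cache-friendly, which is why matching loop order to layout matters.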

CSR

CSR stores nonzero values and column indices with row pointers, ideal for sparse matrices with efficient row access and matrix-vector products. It dramatically reduces memory for sparse data but has overhead for insertions and irregular pattern updates.
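A pure-Python sketch of the CSR layout and a matrix-vector product makes the three arrays concrete (illustrative only; libraries like SciPy implement this in compiled code):

```python
# Dense 3x4 matrix with 5 nonzeros:
# [[5, 0, 0, 1],
#  [0, 8, 0, 0],
#  [0, 0, 3, 6]]
values  = [5, 1, 8, 3, 6]   # nonzero values, scanned row by row
col_idx = [0, 3, 1, 2, 3]   # column of each stored value
row_ptr = [0, 2, 3, 5]      # row i occupies values[row_ptr[i]:row_ptr[i+1]]

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x touching only the stored nonzeros: O(nnz) work."""
    y = [0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

print(csr_matvec(values, col_idx, row_ptr, [1, 2, 3, 4]))  # [9, 16, 33]
```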

CSC

CSC is like CSR but organizes data by columns with column pointers and row indices. It excels at column-oriented operations and solving linear systems, while insertions and random updates remain expensive compared with dense arrays.

COO

COO format stores explicit (row, column, value) tuples for each nonzero entry, making construction and incremental assembly simple. It is conversion-friendly and flexible but less efficient for arithmetic or repeated access than CSR/CSC.

Bitset

Fixed-size bitsets pack boolean flags as bits in contiguous words, offering very low memory per element and fast bitwise bulk operations. Good for flags, bitmap indices, and set operations; lack dynamic resizing but have excellent cache and vectorized instruction opportunities.
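The masking operations behind a bitset can be sketched with a single Python integer standing in for the packed words (a toy model; std::bitset and friends operate on machine words directly):

```python
class Bitset:
    """Fixed-size bitset sketch: flags packed as bits, O(1) ops via masks."""

    def __init__(self, nbits):
        self.nbits = nbits
        self.bits = 0

    def set(self, i):
        self.bits |= 1 << i          # set bit i

    def clear(self, i):
        self.bits &= ~(1 << i)       # clear bit i

    def test(self, i):
        return bool(self.bits >> i & 1)

    def count(self):
        return bin(self.bits).count("1")  # population count

flags = Bitset(64)
flags.set(3)
flags.set(40)
print(flags.test(3), flags.test(4), flags.count())  # True False 2
```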

Dynamic bitset

Dynamic bitsets provide resizable packed-bit arrays, combining compact memory with operations like rank/select and bitwise arithmetic. They support dynamic growth and efficient bulk operations, useful in indexing, compression, and algorithms needing compact boolean vectors.

Circular buffer

A fixed-size circular buffer reuses a contiguous buffer for FIFO queues with head and tail indices. It provides constant-time enqueue and dequeue without allocations, ideal for streaming, real-time audio, and inter-thread communication with minimal overhead.
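The head/tail index arithmetic is the whole trick; here is a minimal sketch (class and method names are illustrative):

```python
class RingBuffer:
    """Fixed-capacity FIFO over one contiguous buffer; O(1) enqueue/dequeue."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0   # index of the next item to dequeue
        self.size = 0

    def enqueue(self, item):
        if self.size == len(self.buf):
            raise OverflowError("buffer full")
        tail = (self.head + self.size) % len(self.buf)  # wrap around
        self.buf[tail] = item
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("buffer empty")
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)     # wrap around
        self.size -= 1
        return item

rb = RingBuffer(3)
for x in "abc":
    rb.enqueue(x)
print(rb.dequeue())   # a
rb.enqueue("d")       # reuses the slot just freed, no allocation
print(rb.dequeue(), rb.dequeue(), rb.dequeue())  # b c d
```

No allocation happens after construction, which is exactly the property real-time and interrupt-context code relies on.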

CUDA device array

CUDA device arrays reside in GPU memory and are optimized for parallel kernel access and high-bandwidth transfers. They are contiguous, typed buffers used by GPU kernels and libraries for linear algebra and deep learning; host-device transfers and alignment affect performance.

TypedArray

The TypedArray family provides contiguous binary buffers (views over an ArrayBuffer) with specific element types such as Uint8Array or Float32Array. They enable efficient binary data handling, WebGL buffers, and low-level I/O in JavaScript, with predictable memory layout and no per-element objects.

Structure of Arrays (SoA)

SoA arranges each field of a structure into its own contiguous array, improving SIMD and cache-friendly vectorized processing compared with Array-of-Structures. It boosts throughput for data-parallel workloads and can simplify GPU/massively-parallel memory access patterns.
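The contrast is easy to sketch in Python, even though the cache/SIMD payoff only materializes in compiled languages. The AoS side below keeps one object per particle; the SoA side keeps one contiguous typed array per field (field names are illustrative):

```python
import array

# Array of Structures: one object per particle, fields interleaved
aos = [{"x": float(i), "vx": 1.0} for i in range(4)]

# Structure of Arrays: one contiguous typed array per field
xs  = array.array("d", (float(i) for i in range(4)))
vxs = array.array("d", (1.0 for _ in range(4)))

# A field-wise update (integrate positions) touches only two tight,
# contiguous arrays instead of striding across whole records
dt = 0.5
for i in range(len(xs)):
    xs[i] += vxs[i] * dt

print(list(xs))  # [0.5, 1.5, 2.5, 3.5]
```

In C, C++, or on a GPU the same layout lets the update vectorize, because consecutive lanes load consecutive memory.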