Fix more links
commit 2061fc6dbb (parent 154573c1bb)
@@ -24,7 +24,7 @@ Let's consider an **AP** database. In such a database, reads and writes would al
 However, the downside is stark. Imagine a simple distributed database consisting of two nodes and a network partition making them unable to communicate. To be Available, each of the two nodes must continue to accept writes from clients.

-.. figure:: /images/AP_Partition.png
+.. figure:: images/AP_Partition.png

    Data divergence in an AP system during partition
@@ -62,7 +62,7 @@ Imagine that a rack-top switch fails, and A is partitioned from the network. A w
 However, for all other clients, the database servers can reach a majority of coordination servers, B and C. The replication configuration has ensured there is a full copy of the data available even without A. For these clients, the database will remain available for reads and writes and the web servers will continue to serve traffic.

-.. figure:: /images/FDB_Partition.png
+.. figure:: images/FDB_Partition.png

    Maintenance of availability during partition
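Note: a sketch of the majority rule at work here. The coordinator names A, B, and C come from the text; the reachability check itself is illustrative, not FoundationDB's actual coordination protocol.

```python
# Illustrative majority check over the coordinators named in the text.
COORDINATORS = {"A", "B", "C"}

def can_stay_available(reachable):
    """A side of the partition keeps serving only if it reaches a
    strict majority of the coordination servers."""
    return len(COORDINATORS & set(reachable)) > len(COORDINATORS) // 2

# A sits behind the failed rack-top switch and reaches only itself.
print(can_stay_available({"A"}))       # False: 1 of 3 is no majority
# All other clients still reach B and C.
print(can_stay_available({"B", "C"}))  # True: 2 of 3, reads and writes continue
```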
@@ -9,7 +9,7 @@ Scaling
 FoundationDB scales linearly with the number of cores in a cluster over a wide range of sizes.

-.. image:: /images/scaling.png
+.. image:: images/scaling.png

 Here, a cluster of commodity hardware scales to **8.2 million** operations/sec doing a 90% read and 10% write workload with 16 byte keys and values between 8 and 100 bytes.
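Note: for readers reproducing the shape of that workload (not the result), a sketch that generates the stated 90/10 mix with the stated key and value sizes; the uniform key distribution is an assumption, and this is not the actual benchmark harness.

```python
import os
import random

def next_operation():
    """One operation in the stated mix: 90% reads, 10% writes,
    16-byte keys, 8-100 byte values. Key distribution assumed uniform."""
    key = os.urandom(16)
    if random.random() < 0.90:
        return ("read", key)
    return ("write", key, os.urandom(random.randint(8, 100)))

sample = [next_operation() for _ in range(1000)]
reads = sum(1 for op in sample if op[0] == "read")
print(f"{reads / len(sample):.0%} reads in a 1000-operation sample")
```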
@@ -24,7 +24,7 @@ Latency
 FoundationDB has low latencies over a broad range of workloads that only increase modestly as the cluster approaches saturation.

-.. image:: /images/latency.png
+.. image:: images/latency.png

 When run at less than **75% load**, FoundationDB typically has the following latencies:
@@ -53,7 +53,7 @@ Throughput (per core)
 FoundationDB provides good throughput for the full range of read and write workloads, with two fully durable storage engine options.

-.. image:: /images/throughput.png
+.. image:: images/throughput.png

 FoundationDB offers two :ref:`storage engines <configuration-storage-engine>`, optimized for distinct use cases, both of which write to disk before reporting transactions committed. For each storage engine, the graph shows throughput of a single FoundationDB process running on a **single core** with saturating read/write workloads ranging from 100% reads to 100% writes, all with 16 byte keys and values between 8 and 100 bytes. Throughput for the unmixed workloads is about:
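Note: which engine a cluster is running can be checked from any client. Below is a sketch using the Python bindings and the machine-readable status special key; the JSON field path is our assumption from typical status output, so verify the schema against the status documentation.

```python
import json
import fdb

fdb.api_version(630)  # any API version your client library supports
db = fdb.open()       # uses the default cluster file

# FoundationDB serves machine-readable status at a special key.
raw = db[b'\xff\xff/status/json']
status = json.loads(bytes(raw))

# Field path assumed from typical status output (values like "ssd-2" or "memory").
print(status["cluster"]["configuration"]["storage_engine"])
```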
@@ -79,7 +79,7 @@ Concurrency
 FoundationDB is designed to achieve great performance under high concurrency from a large number of clients.

-.. image:: /images/concurrency.png
+.. image:: images/concurrency.png

 Its asynchronous design allows it to handle very high concurrency, and for a typical workload with 90% reads and 10% writes, maximum throughput is reached at about 200 concurrent operations. This number of operations was achieved with **20** concurrent transactions per FoundationDB process each running 10 operations with 16 byte keys and values between 8 and 100 bytes.
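Note: a rough sketch of that setup with the Python bindings: 20 transactions in flight, each issuing 10 operations, for about 200 concurrent operations. The thread pool, key prefix, and exact operation mix are our assumptions, not the actual benchmark harness.

```python
import os
import random
from concurrent.futures import ThreadPoolExecutor

import fdb

fdb.api_version(630)
db = fdb.open()

@fdb.transactional
def run_ten_ops(tr):
    # One transaction issuing 10 operations: ~90% reads, ~10% writes,
    # 16-byte keys, 8-100 byte values, as described in the text.
    for _ in range(10):
        key = b"b" + os.urandom(15)  # 16 bytes total; prefix avoids the \xff system keyspace
        if random.random() < 0.90:
            tr[key]  # issue a read; the value may be absent
        else:
            tr[key] = os.urandom(random.randint(8, 100))

# 20 concurrent transactions x 10 operations each ~= 200 in-flight operations.
with ThreadPoolExecutor(max_workers=20) as pool:
    futures = [pool.submit(run_ten_ops, db) for _ in range(20)]
    for f in futures:
        f.result()  # surface any errors from the workers
```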