1. 21 Mar 2020 (1 commit)
    • Support direct IO in RandomAccessFileReader::MultiRead (#6446) · 4fc21664
      Authored by Cheng Chang
      Summary:
      By supporting direct IO in RandomAccessFileReader::MultiRead, the benefits of parallel IO (IO uring) and direct IO can be combined.
      
      In direct IO mode, read requests are aligned and merged together before being issued to RandomAccessFile::MultiRead, so blocks in the original requests might share the same underlying buffer. The shared buffers are returned in `aligned_bufs`, a new parameter of the `MultiRead` API.
      
      For example, suppose the alignment requirement for direct IO is 4KB, one request is (offset: 1KB, len: 1KB), and another request is (offset: 3KB, len: 1KB). Since they both belong to the page (offset: 0, len: 4KB), `MultiRead` reads the page only once with direct IO into a heap buffer and returns 2 Slices referencing regions in that same buffer. See `random_access_file_reader_test.cc` for more examples.
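
      A minimal sketch of the alignment arithmetic described above, assuming a 4KB
      alignment; the helper names are illustrative, not RocksDB's actual code:

          #include <cstdint>
          #include <cstdio>

          constexpr uint64_t kAlign = 4096;  // direct IO alignment requirement
          uint64_t AlignDown(uint64_t x) { return x & ~(kAlign - 1); }
          uint64_t AlignUp(uint64_t x) { return AlignDown(x + kAlign - 1); }

          int main() {
            // Request A: (offset: 1KB, len: 1KB); request B: (offset: 3KB, len: 1KB).
            uint64_t a_off = 1024, a_len = 1024, b_off = 3072, b_len = 1024;
            (void)a_len;
            // Both aligned ranges are [0, 4KB), so one direct IO read suffices.
            uint64_t io_off = AlignDown(a_off);
            uint64_t io_len = AlignUp(b_off + b_len) - io_off;  // 4096
            std::printf("merged read: offset=%llu len=%llu\n",
                        (unsigned long long)io_off, (unsigned long long)io_len);
            // Each result Slice then points into the shared buffer at
            // (request offset - io_off): A at buf+1024, B at buf+3072.
            return 0;
          }
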
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6446
      
      Test Plan: Added a new test `random_access_file_reader_test.cc`.
      
      Reviewed By: anand1976
      
      Differential Revision: D20097518
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: ca48a8faf9c3af146465c102ef6b266a363e78d1
  2. 13 Mar 2020 (1 commit)
  3. 07 Mar 2020 (1 commit)
    • Remove memcpy from RandomAccessFileReader::Read in direct IO mode (#6455) · 0a0151fb
      Authored by Cheng Chang
      Summary:
      In direct IO mode, RandomAccessFileReader::Read allocates an internal aligned buffer and then copies the result into the scratch buffer. If the result is only used temporarily inside a function, there is no need for the memcpy; the result Slice can simply refer to the internally allocated buffer.
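
      A simplified sketch of the idea, with hypothetical types (RocksDB's actual
      Read() signature differs): instead of copying into the caller's scratch
      buffer, the result Slice borrows the internal aligned buffer.

          #include <cstddef>
          #include <memory>

          struct Slice { const char* data; size_t size; };

          struct AlignedReader {
            std::unique_ptr<char[]> aligned_buf;  // owned by the reader

            Slice ReadDirect(size_t n) {
              aligned_buf.reset(new char[n]);  // stand-in for an aligned allocation
              // ... issue the direct IO read into aligned_buf here ...
              // Old: memcpy(scratch, aligned_buf.get(), n); return {scratch, n};
              // New: no copy; the Slice is valid only while the reader's buffer lives.
              return {aligned_buf.get(), n};
            }
          };
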
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6455
      
      Test Plan: make check
      
      Differential Revision: D20106753
      
      Pulled By: cheng-chang
      
      fbshipit-source-id: 44f505843837bba47a56e3fa2c4dd3bd76486b58
  4. 03 Mar 2020 (1 commit)
    • return timestamp from get (#6409) · 904a60ff
      Authored by Huisheng Liu
      Summary:
      Added new Get() methods that return the timestamp. A dummy implementation is provided so that classes derived from DB don't need to be touched to provide their own implementation. MultiGet is not included.
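
      A hedged usage sketch of the new overload (check db.h for the exact
      signature; the timestamp read option shown is part of the user-defined
      timestamp feature, and max_ts is an assumed bound defined elsewhere):

          std::string value;
          std::string timestamp;
          ReadOptions read_opts;
          Slice ts = max_ts;          // assumed upper bound for the read
          read_opts.timestamp = &ts;
          Status s = db->Get(read_opts, db->DefaultColumnFamily(), "key",
                             &value, &timestamp);
          // On success, `timestamp` holds the timestamp of the returned entry.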
      
      ReadRandom perf test (10 minutes) on the same development machine RAM drive with the same DB data shows no regression (within margin of error). The test is adapted from https://github.com/facebook/rocksdb/wiki/RocksDB-In-Memory-Workload-Performance-Benchmarks.
          base line (commit 72ee067b):
              101.712 micros/op 314602 ops/sec;   36.0 MB/s (5658999 of 5658999 found)
          This PR:
              100.288 micros/op 319071 ops/sec;   36.5 MB/s (5674999 of 5674999 found)
      
      ./db_bench --db=r:\rocksdb.github --num_levels=6 --key_size=20 --prefix_size=20 --keys_per_prefix=0 --value_size=100 --cache_size=2147483648 --cache_numshardbits=6 --compression_type=none --compression_ratio=1 --min_level_to_compress=-1 --disable_seek_compaction=1 --hard_rate_limit=2 --write_buffer_size=134217728 --max_write_buffer_number=2 --level0_file_num_compaction_trigger=8 --target_file_size_base=134217728 --max_bytes_for_level_base=1073741824 --disable_wal=0 --wal_dir=r:\rocksdb.github\WAL_LOG --sync=0 --verify_checksum=1 --delete_obsolete_files_period_micros=314572800 --max_background_compactions=4 --max_background_flushes=0 --level0_slowdown_writes_trigger=16 --level0_stop_writes_trigger=24 --statistics=0 --stats_per_interval=0 --stats_interval=1048576 --histogram=0 --use_plain_table=1 --open_files=-1 --mmap_read=1 --mmap_write=0 --memtablerep=prefix_hash --bloom_bits=10 --bloom_locality=1 --duration=600 --benchmarks=readrandom --use_existing_db=1 --num=25000000 --threads=32
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6409
      
      Differential Revision: D20200086
      
      Pulled By: riversand963
      
      fbshipit-source-id: 490edd74d924f62bd8ae9c29c2a6bbbb8410ca50
  5. 21 Feb 2020 (1 commit)
    • Replace namespace name "rocksdb" with ROCKSDB_NAMESPACE (#6433) · fdf882de
      Authored by sdong
      Summary:
      When dynamically linking two binaries together, different builds of RocksDB from two sources might clash and cause errors. To give users a tool to solve the problem, the RocksDB namespace is changed to a macro that can be overridden at build time.
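
      The mechanism, roughly as in the rocksdb_namespace.h header this PR
      introduces (a sketch, not the verbatim code): the namespace name defaults
      to rocksdb but can be overridden at build time, e.g. with
      -DROCKSDB_NAMESPACE=myrocks.

          #ifndef ROCKSDB_NAMESPACE
          #define ROCKSDB_NAMESPACE rocksdb
          #endif

          namespace ROCKSDB_NAMESPACE {
          // ... all RocksDB symbols live here; two differently named builds
          // can then be linked into one binary without clashing ...
          }  // namespace ROCKSDB_NAMESPACE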
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6433
      
      Test Plan: Build release, all and jtest. Try building with ROCKSDB_NAMESPACE overridden to another value.
      
      Differential Revision: D19977691
      
      fbshipit-source-id: aa7f2d0972e1c31d75339ac48478f34f6cfcfb3e
  6. 08 Feb 2020 (1 commit)
    • BlobDB: ignore trivially moved files when updating the SST<->blob file mapping (#6381) · 1b4be4ca
      Authored by Levi Tamasi
      Summary:
      BlobDB keeps track of the mapping between SSTs and blob files using
      the `OnFlushCompleted` and `OnCompactionCompleted` callbacks of
      the `EventListener` interface: upon receiving a flush notification, a link
      is added between the newly flushed SST and the corresponding blob file;
      for compactions, links are removed for the inputs and added for the outputs.
      The earlier code performed this link deletion and addition even for
      trivially moved files; the new code walks through the two lists together
      (in a fashion that's similar to merge sort) and skips such files.
      This should mitigate https://github.com/facebook/rocksdb/issues/6338,
      wherein an assertion is triggered with the earlier code when a compaction
      notification for a trivial move precedes the flush notification for the
      moved SST.
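
      A hedged sketch of the merge-sort-style walk (helper names hypothetical):
      files appearing in both the sorted input and output lists of a compaction
      were trivially moved, so their links are left untouched.

          #include <cstdint>
          #include <vector>

          void UpdateLinks(const std::vector<uint64_t>& inputs,     // sorted SST numbers
                           const std::vector<uint64_t>& outputs) {  // sorted SST numbers
            size_t i = 0, o = 0;
            while (i < inputs.size() && o < outputs.size()) {
              if (inputs[i] == outputs[o]) {
                ++i; ++o;  // trivially moved: skip both, mapping unchanged
              } else if (inputs[i] < outputs[o]) {
                /* UnlinkSstFromBlobFile(inputs[i]); */ ++i;
              } else {
                /* LinkSstToBlobFile(outputs[o]); */ ++o;
              }
            }
            for (; i < inputs.size(); ++i) { /* unlink remaining inputs */ }
            for (; o < outputs.size(); ++o) { /* link remaining outputs */ }
          }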
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6381
      
      Test Plan: make check
      
      Differential Revision: D19773729
      
      Pulled By: ltamasi
      
      fbshipit-source-id: ae0f273ded061110dd9334e8fb99b0d7786650b0
  7. 30 Jan 2020 (1 commit)
    • Add statistics for BlobDB GC (#6296) · 9e3ace42
      Authored by Levi Tamasi
      Summary:
      The patch adds statistics support to the new BlobDB garbage collection implementation;
      namely, it adds support for the following (pre-existing) tickers:
      
      `BLOB_DB_GC_NUM_FILES`: the number of blob files obsoleted by the GC logic.
      `BLOB_DB_GC_NUM_NEW_FILES`: the number of new blob files generated by the GC logic.
      `BLOB_DB_GC_FAILURES`: the number of failed GC passes (where a GC pass is
      equivalent to a (sub)compaction).
      `BLOB_DB_GC_NUM_KEYS_RELOCATED`: the number of blobs relocated to new blob
      files by the GC logic.
      `BLOB_DB_GC_BYTES_RELOCATED`: the total size of blobs relocated to new blob files.
      
      The tickers `BLOB_DB_GC_NUM_KEYS_OVERWRITTEN`, `BLOB_DB_GC_NUM_KEYS_EXPIRED`,
      `BLOB_DB_GC_BYTES_OVERWRITTEN`, `BLOB_DB_GC_BYTES_EXPIRED`, and
      `BLOB_DB_GC_MICROS` are not relevant for the new GC logic, and are thus marked
      deprecated.
      
      The patch also adds a couple of log messages that log the number and total size of
      blobs encountered and relocated during a GC pass, as well as the number of blob
      files created and obsoleted.
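
      A hedged sketch of reading the tickers listed above through the
      Statistics interface (statistics must be enabled in the DB options for
      the counts to be populated):

          std::shared_ptr<Statistics> stats = CreateDBStatistics();
          // ... set options.statistics = stats, open BlobDB, let GC run ...
          uint64_t gc_files = stats->getTickerCount(BLOB_DB_GC_NUM_FILES);
          uint64_t relocated_keys =
              stats->getTickerCount(BLOB_DB_GC_NUM_KEYS_RELOCATED);
          uint64_t relocated_bytes =
              stats->getTickerCount(BLOB_DB_GC_BYTES_RELOCATED);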
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6296
      
      Test Plan: Extended unit tests and used the BlobDB mode of `db_bench`.
      
      Differential Revision: D19402513
      
      Pulled By: ltamasi
      
      fbshipit-source-id: d53d2bfbf4928a1db1e9346c67ebb9007b8932ec
  8. 15 Jan 2020 (1 commit)
    • Remove earlier partial BlobDB GC implementation (#6278) · 1dd7873e
      Authored by Levi Tamasi
      Summary:
      In addition to removing the earlier partially implemented garbage collection
      logic from the BlobDB codebase, the patch also removes the test cases (as well as
      the related sync points, as appropriate) that were only relevant for the old
      implementation, and reworks the remaining ones so they use the new GC logic.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6278
      
      Test Plan: `make check`
      
      Differential Revision: D19335226
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 0cc1794bc9892feda1426ed5522a318f3cb1b692
  9. 20 Dec 2019 (1 commit)
    • BlobDB: only compare CF IDs when checking whether an API call is for the default CF (#6226) · 7a7ca8eb
      Authored by Levi Tamasi
      Summary:
      BlobDB currently only supports using the default column family. The earlier
      code enforces this by comparing the `ColumnFamilyHandle` passed to the
      `Get`/`Put`/etc. call with the handle returned by `DefaultColumnFamily`
      (which, at the end of the day, comes from `DBImpl::default_cf_handle_`).
      Since other `ColumnFamilyHandle`s can also point to the default column
      family, this can reject legitimate requests as well. (As an example,
      with the earlier code, the handle returned by `BlobDB::Open` cannot
      actually be used in API calls.) The patch fixes this by comparing only
      the IDs of the column family handles instead of the pointers themselves.
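
      A hedged sketch of the fix (the real check lives in BlobDBImpl): compare
      column family IDs instead of handle pointers, since multiple handles may
      refer to the default column family.

          bool IsDefaultColumnFamily(ColumnFamilyHandle* cfh,
                                     ColumnFamilyHandle* default_cfh) {
            // Earlier check, too strict: return cfh == default_cfh;
            return cfh->GetID() == default_cfh->GetID();
          }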
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6226
      
      Test Plan: `make check`
      
      Differential Revision: D19187461
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 54ce2e12ebb1f07e6d1e70e3b1e0213dfa94bda2
  10. 14 Dec 2019 (3 commits)
    • Make it possible to enable periodic compactions for BlobDB (#6172) · 0d2172f1
      Authored by Levi Tamasi
      Summary:
      Periodic compactions ensure that even SSTs that do not get picked up
      otherwise eventually go through compaction; used in conjunction with
      BlobDB's garbage collection, they enable BlobDB to reclaim space when
      old blob files are used by such straggling SSTs.
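
      A hedged configuration sketch: periodic compactions are an option of the
      underlying DB, combined here with BlobDB's GC (option names as in the
      public headers; the path is a placeholder):

          Options options;
          options.periodic_compaction_seconds = 7 * 24 * 60 * 60;  // e.g. weekly

          blob_db::BlobDBOptions bdb_options;
          bdb_options.enable_garbage_collection = true;

          blob_db::BlobDB* db = nullptr;
          Status s = blob_db::BlobDB::Open(options, bdb_options, "/tmp/blobdb", &db);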
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6172
      
      Test Plan: Ran `make check` and used the BlobDB mode of `db_bench`.
      
      Differential Revision: D19045045
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 04636ecc4b6cfe8d495bf656faa65d54a5eb1a93
    • Introduce a new storage specific Env API (#5761) · afa2420c
      Authored by anand76
      Summary:
      The current Env API encompasses both storage/file operations and OS-related operations. Most of the APIs return a Status, which does not carry enough metadata about an error, such as whether it's retryable, the scope (i.e. fault domain) of the error, etc., that may be required in order to properly handle a storage error. The file APIs also do not provide enough control over the IO SLA, such as timeout, prioritization, hinting about placement and redundancy, etc.
      
      This PR separates out the file/storage APIs from Env into a new FileSystem class. The APIs are updated to return an IOStatus with metadata about the error, as well as to take an IOOptions structure as input in order to allow more control over the IO.
      
      The user can set both ```options.env``` and ```options.file_system``` to specify that RocksDB should use the former for OS related operations and the latter for storage operations. Internally, a ```CompositeEnvWrapper``` has been introduced that inherits from ```Env``` and redirects individual methods to either an ```Env``` implementation or the ```FileSystem``` as appropriate. When options are sanitized during ```DB::Open```, ```options.env``` is replaced with a newly allocated ```CompositeEnvWrapper``` instance if both env and file_system have been specified. This way, the rest of the RocksDB code can continue to function as before.
      
      This PR also ports PosixEnv to the new API by splitting it into two - PosixEnv and PosixFileSystem. PosixEnv is defined as a sub-class of CompositeEnvWrapper, and threading/time functions are overridden with Posix specific implementations in order to avoid an extra level of indirection.
      
      The ```CompositeEnvWrapper``` translates ```IOStatus``` return code to ```Status```, and sets the severity to ```kSoftError``` if the io_status is retryable. The error handling code in RocksDB can then recover the DB automatically.
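
      A hedged usage sketch following the summary above (which states that both
      options.env and options.file_system can be set; MyFileSystem is a
      hypothetical FileSystem subclass): the default Env keeps handling
      OS-level operations while the custom FileSystem handles storage IO.

          Options options;
          options.env = Env::Default();  // threads, clock, OS facilities
          options.file_system = std::make_shared<MyFileSystem>();  // storage ops
          // During DB::Open, options.env is replaced with a CompositeEnvWrapper
          // that routes each call to the appropriate implementation.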
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5761
      
      Differential Revision: D18868376
      
      Pulled By: anand1976
      
      fbshipit-source-id: 39efe18a162ea746fabac6360ff529baba48486f
    • Move out valid blobs from the oldest blob files during compaction (#6121) · 583c6953
      Authored by Levi Tamasi
      Summary:
      The patch adds logic that relocates live blobs from the oldest N non-TTL
      blob files as they are encountered during compaction (assuming the BlobDB
      configuration option `enable_garbage_collection` is `true`), where N is defined
      as the number of immutable non-TTL blob files multiplied by the value of
      a new BlobDB configuration option called `garbage_collection_cutoff`.
      (The default value of this parameter is 0.25, that is, by default the valid blobs
      residing in the oldest 25% of immutable non-TTL blob files are relocated.)
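
      A hedged sketch of how the cutoff translates into a file count, per the
      definition above (names hypothetical):

          size_t NumBlobFilesToGc(size_t num_immutable_non_ttl_files,
                                  double garbage_collection_cutoff /* 0.25 */) {
            return static_cast<size_t>(garbage_collection_cutoff *
                                       num_immutable_non_ttl_files);
          }
          // e.g. 20 immutable non-TTL blob files at the default cutoff: valid
          // blobs in the oldest 5 files get relocated during compaction.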
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6121
      
      Test Plan: Added unit test and tested using the BlobDB mode of `db_bench`.
      
      Differential Revision: D18785357
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 8c21c512a18fba777ec28765c88682bb1a5e694e
  11. 13 Dec 2019 (1 commit)
  12. 28 Nov 2019 (1 commit)
  13. 27 Nov 2019 (2 commits)
    • Refactor and clean up the code that reads a blob from a file (#6093) · d9314a92
      Authored by Levi Tamasi
      Summary:
      This patch factors out the logic that reads a (potentially compressed) blob
      from a file into a separate helper method `GetRawBlobFromFile`, and cleans
      up the code a bit. Also, errors during decompression are now logged/propagated
      to the user by returning a `Status` code of `Corruption`.
      
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6093
      
      Test Plan: `make check`
      
      Differential Revision: D18716673
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 44144bc064cab616862d5643f34384f2bae6eb78
    • Refactor blob file creation logic (#6066) · 72daa92d
      Authored by Levi Tamasi
      Summary:
      The patch refactors and cleans up the logic around creating new blob files
      by moving the common code of `SelectBlobFile` and `SelectBlobFileTTL`
      to a new helper method `CreateBlobFileAndWriter`, bringing the implementation
      of `SelectBlobFile` and `SelectBlobFileTTL` into sync, and increasing encapsulation
      by adding new constructors for `BlobFile` and `BlobLogHeader`.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6066
      
      Test Plan:
      Ran `make check` and used the BlobDB mode of `db_bench` to sanity test both
      the TTL and the non-TTL code paths.
      
      Differential Revision: D18646921
      
      Pulled By: ltamasi
      
      fbshipit-source-id: e5705a84807932e31dccab4f49b3e64369cea26d
  14. 19 Nov 2019 (1 commit)
    • Mark blob files not needed by any memtables/SSTs obsolete (#6032) · 279c4883
      Authored by Levi Tamasi
      Summary:
      The patch adds logic to mark no longer needed blob files obsolete upon database open
      and whenever a flush or compaction completes. Unneeded blob files are detected by
      iterating through live immutable non-TTL blob files starting from the lowest-numbered one,
      and stopping when a blob file used by any SSTs or potentially used by memtables is found.
      (The latter is determined by comparing the sequence number at which the blob file
      became immutable with the largest sequence number received in flush notifications.)
      
      In addition, the patch cleans up the logic around closing and obsoleting blob files and
      enforces invariants around this area (blob files are now guaranteed to go through the
      stages mutable-non-obsolete, immutable-non-obsolete, and immutable-obsolete in this
      order).
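
      A hedged sketch of the detection loop described above, with hypothetical
      helpers; files are visited in ascending file-number order, and the scan
      stops at the first file that may still be referenced:

          for (auto& file : live_immutable_non_ttl_blob_files) {  // sorted by number
            const bool used_by_ssts = IsReferencedByAnySst(file);
            // Potential memtable references: the file became immutable after
            // the largest sequence number seen in flush notifications.
            const bool maybe_in_memtables =
                file.immutable_sequence() > largest_flushed_seq_number;
            if (used_by_ssts || maybe_in_memtables) break;
            MarkObsolete(file);
          }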
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6032
      
      Test Plan: Extended unit tests and tested using the BlobDB mode of `db_bench`.
      
      Differential Revision: D18495610
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 11825b84af74f3f4abfd9bcae04e80870ae58961
  15. 12 Nov 2019 (1 commit)
    • BlobDB: Maintain mapping between blob files and SSTs (#6020) · 8e7aa628
      Authored by Levi Tamasi
      Summary:
      The patch adds logic to BlobDB to maintain the mapping between blob files
      and SSTs for which the blob file in question is the oldest blob file referenced
      by the SST file. The mapping is initialized during database open based on the
      information retrieved using `GetLiveFilesMetaData`, and updated after
      flushes/compactions based on the information received through the `EventListener`
      interface (or, in the case of manual compactions issued through the `CompactFiles`
      API, the `CompactionJobInfo` object).
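
      A hedged sketch of the EventListener plumbing (callback signatures as in
      listener.h; the map-update logic itself is elided):

          class BlobMappingListener : public EventListener {
           public:
            void OnFlushCompleted(DB* /*db*/, const FlushJobInfo& info) override {
              // Link the newly flushed SST (identified by info) to the oldest
              // blob file it references.
            }
            void OnCompactionCompleted(DB* /*db*/,
                                       const CompactionJobInfo& info) override {
              // Unlink mappings for info.input_files, add them for
              // info.output_files.
            }
          };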
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6020
      
      Test Plan: Added a unit test; also tested using the BlobDB mode of `db_bench`.
      
      Differential Revision: D18410508
      
      Pulled By: ltamasi
      
      fbshipit-source-id: dd9e778af781cfdb0d7056298c54ba9cebdd54a5
  16. 30 Oct 2019 (1 commit)
    • Auto enable Periodic Compactions if a Compaction Filter is used (#5865) · 4c9aa30a
      Authored by Sagar Vemuri
      Summary:
      - Periodic compactions are auto-enabled in Level Compaction if a compaction filter or a compaction filter factory is set.
      - The default value of `periodic_compaction_seconds` is changed to UINT64_MAX, which lets RocksDB auto-tune periodic compactions as needed. An explicit value of 0 will still work as before, i.e. disable periodic compactions completely. For now, on seeing a compaction filter along with a UINT64_MAX value for `periodic_compaction_seconds`, RocksDB will make SST files older than 30 days go through periodic compactions.
      
      Some RocksDB users make use of compaction filters to control when their data can be deleted, usually with custom TTL logic. But compactions can occasionally be delayed well past the TTL expiry by factors like low write volume to a key range or data having already reached the bottom level. The Periodic Compactions feature was originally built to help such cases. Periodic compactions are now auto-enabled by default when compaction filters or compaction filter factories are used, as collecting garbage promptly is generally helpful in all such cases.
      
      `periodic_compaction_seconds` is set to a large value, 30 days, in `SanitizeOptions` when RocksDB sees that a `compaction_filter` or `compaction_filter_factory` is used.
      
      This is done only for Level Compaction style.
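
      A hedged sketch of the sanitization rule (the actual code is in
      SanitizeOptions; the structure below is illustrative, not the exact code):

          if (result.compaction_style == kCompactionStyleLevel &&
              (result.compaction_filter != nullptr ||
               result.compaction_filter_factory != nullptr) &&
              result.periodic_compaction_seconds == port::kMaxUint64) {
            // Auto-tune: 30 days, per the description above.
            result.periodic_compaction_seconds = 30ull * 24 * 60 * 60;
          }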
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5865
      
      Test Plan:
      - Added a new test `DBCompactionTest.LevelPeriodicCompactionWithCompactionFilters` to make sure that `periodic_compaction_seconds` is set if either `compaction_filter` or `compaction_filter_factory` options are set.
      - `COMPILE_WITH_ASAN=1 make check`
      
      Differential Revision: D17659180
      
      Pulled By: sagar0
      
      fbshipit-source-id: 4887b9cf2e53cf2dc93a7b658c6b15e1181217ee
  17. 15 Oct 2019 (1 commit)
  18. 17 Sep 2019 (1 commit)
    • Divide file_reader_writer.h and .cc (#5803) · b931f84e
      Authored by sdong
      Summary:
      file_reader_writer.h and .cc contain several classes and helper functions, and they are hard to navigate. Separate them into multiple files and put those under file/.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5803
      
      Test Plan: Build whole project using make and cmake.
      
      Differential Revision: D17374550
      
      fbshipit-source-id: 10efca907721e7a78ed25bbf74dc5410dea05987
  19. 28 Aug 2019 (1 commit)
    • replace some reinterpret_cast with static_cast_with_check (#5740) · 1b4c104a
      Authored by Pratik Dhandharia
      Summary:
      This PR focuses on replacing some of the reinterpret_cast<DBImpl*> casts with static_cast_with_check<DBImpl, DB> (see the sketch after the file list below).
      
      Files impacted:
      
      ./db/db_impl/db_impl_compaction_flush.cc
      ./db/write_batch.cc
      ./utilities/blob_db/blob_db_impl.cc
      ./utilities/transactions/pessimistic_transaction_db.cc
      ./utilities/transactions/transaction_base.cc
      ./utilities/transactions/write_prepared_txn_db.cc
      ./utilities/transactions/write_unprepared_txn_db.cc
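
      A hedged sketch of what static_cast_with_check does (RocksDB's actual
      helper lives in util/cast_util.h; this is the general pattern, not the
      verbatim code): a plain static_cast in release builds, with the downcast
      verified in debug builds.

          #include <cassert>

          template <class To, class From>
          To* static_cast_with_check(From* from) {
            To* to = static_cast<To*>(from);
          #ifndef NDEBUG
            assert(from == nullptr || dynamic_cast<To*>(from) != nullptr);
          #endif
            return to;
          }

          // Usage, replacing reinterpret_cast<DBImpl*>(db):
          // DBImpl* impl = static_cast_with_check<DBImpl, DB>(db);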
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5740
      
      Differential Revision: D17055691
      
      Pulled By: pdhandharia
      
      fbshipit-source-id: 0f8034d1b32eade56e37d59c04b7bf236a81d8e8
  20. 15 Aug 2019 (1 commit)
    • Fix data races in BlobDB (#5698) · 0a97125e
      Authored by Levi Tamasi
      Summary:
      Some accesses to blob_files_ and open_ttl_files_ in BlobDBImpl, as well
      as to expiration_range_ in BlobFile, were not properly synchronized.
      The patch fixes this and also makes sure the invariant that obsolete_files_
      is a subset of blob_files_ holds even when an attempt to delete an obsolete
      blob file fails.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5698
      
      Test Plan:
      COMPILE_WITH_TSAN=1 make blob_db_test
      gtest-parallel --repeat=1000 ./blob_db_test --gtest_filter="*ShutdownWait*"
      
      The test fails with TSAN errors ~20 times out of 1000 without the patch but
      completes successfully 1000 out of 1000 times with the fix.
      
      Differential Revision: D16793235
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 8034b987598d4fdc9f15098d4589cc49cde484e9
  21. 07 Aug 2019 (1 commit)
    • New API to get all merge operands for a Key (#5604) · d150e014
      Authored by Vijay Nadimpalli
      Summary:
      This is a new API added to db.h to allow fetching all merge operands associated with a key. The main motivation is to support performance-sensitive use cases where a full online merge is not necessary. Example use cases:
      1. Update subset of columns and read subset of columns -
      Imagine a SQL table where a row is encoded as a K/V pair (as is done in MyRocks). If there are many columns and users update only one of them, the merge operator can reduce write amplification. When users read only one or two columns in a query, this feature can avoid a full merge of the whole row and save some CPU.
      2. Updating very few attributes in a value which is a JSON-like document -
      Updating one attribute can be done efficiently using the merge operator, and reading back one attribute is more efficient if a full merge is not needed.
      ----------------------------------------------------------------------------------------------------
      API :
      Status GetMergeOperands(
            const ReadOptions& options, ColumnFamilyHandle* column_family,
            const Slice& key, PinnableSlice* merge_operands,
            GetMergeOperandsOptions* get_merge_operands_options,
            int* number_of_operands)
      
      Example usage :
      int size = 100;
      int number_of_operands = 0;
      std::vector<PinnableSlice> values(size);
      GetMergeOperandsOptions merge_operands_info;
      merge_operands_info.expected_max_number_of_operands = size;
      db_->GetMergeOperands(ReadOptions(), db_->DefaultColumnFamily(), "k1",
                            values.data(), &merge_operands_info,
                            &number_of_operands);
      
      Description:
      Returns all the merge operands corresponding to the key. If the number of merge operands in the DB is greater than merge_operands_options.expected_max_number_of_operands, no merge operands are returned and the status is Incomplete. Merge operands are returned in the order of insertion.
      merge_operands points to an array of at least merge_operands_options.expected_max_number_of_operands entries, and the caller is responsible for allocating it. If the returned status is Incomplete, number_of_operands will contain the total number of merge operands found in the DB for the key.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5604
      
      Test Plan:
      Added unit test and perf test in db_bench that can be run using the command:
      ./db_bench -benchmarks=getmergeoperands --merge_operator=sortlist
      
      Differential Revision: D16657366
      
      Pulled By: vjnadimpalli
      
      fbshipit-source-id: 0faadd752351745224ee12d4ae9ef3cb529951bf
  22. 07 Jul 2019 (1 commit)
  23. 12 Jun 2019 (1 commit)
  24. 01 Jun 2019 (2 commits)
  25. 31 May 2019 (3 commits)
  26. 30 May 2019 (1 commit)
  27. 05 Apr 2019 (1 commit)
    • Fix many bugs in log statement arguments (#5089) · c06c4c01
      Authored by Adam Simpkins
      Summary:
      Annotate all of the logging functions to inform the compiler that these
      use printf-style formatting arguments.  This allows the compiler to emit
      warnings if the format arguments are incorrect.
      
      This also fixes many problems reported now that format string checking
      is enabled.  Many of these are simply mix-ups in the argument type (e.g.,
      int vs uint64_t), but in several cases the wrong number of arguments was
      being passed, which can cause the code to crash.
      
      The primary motivation for this was to fix the log message in
      `DBImpl::SwitchMemtable()` which caused a segfault due to an extra %s
      format parameter with no argument supplied.
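
      A hedged sketch of the annotation technique on GCC/Clang (the macro name
      is hypothetical; RocksDB applies an equivalent attribute to its logging
      declarations): the compiler is told which argument is the format string
      and where the variadic arguments start, so mismatches become warnings.

          #if defined(__GNUC__) || defined(__clang__)
          #define PRINTF_FORMAT(fmt_idx, first_arg_idx) \
            __attribute__((format(printf, fmt_idx, first_arg_idx)))
          #else
          #define PRINTF_FORMAT(fmt_idx, first_arg_idx)
          #endif

          void LogToBuffer(const char* format, ...) PRINTF_FORMAT(1, 2);

          // LogToBuffer("%s", 42);  // now flagged: 'int' passed for '%s'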
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5089
      
      Differential Revision: D14574795
      
      Pulled By: simpkins
      
      fbshipit-source-id: 0921b03f0743652bf4ae21e414ff54b3bb65422a
  28. 27 Mar 2019 (1 commit)
  29. 19 Mar 2019 (1 commit)
    • Feature for sampling and reporting compressibility (#4842) · b45b1cde
      Authored by Shobhit Dayal
      Summary:
      This is a feature to sample data-block compressibility and report it as stats. 1 in N (tunable) blocks is sampled for compressibility using two algorithms:
      1. lz4 or snappy for fast compression
      2. zstd or zlib for slow but higher compression.
      
      The stats are reported to the caller as raw-bytes and compressed-bytes. The block continues to be compressed for storage using the specified CompressionType.
      
      db_bench_tool now has a command line option for specifying the sampling rate. Its default value is 0 (no sampling). To test the overhead for a certain value, users can compare the performance of db_bench_tool while varying the sampling rate. It is unlikely to have a noticeable impact for high values like 20.
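
      A hedged configuration sketch (the knob below matches
      CompressionOptions::sample_for_compression in the public headers; verify
      the exact name against this PR):

          Options options;
          options.compression = kSnappyCompression;  // type used for storage
          options.compression_opts.sample_for_compression = 20;  // 1 in 20 blocks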
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4842
      
      Differential Revision: D13629011
      
      Pulled By: shobhitdayal
      
      fbshipit-source-id: 14ca668bcab6499b2a1734edf848eb62a4f4fafa
  30. 08 Mar 2019 (1 commit)
  31. 01 Mar 2019 (1 commit)
    • Add two more StatsLevel (#5027) · 5e298f86
      Authored by Siying Dong
      Summary:
      Statistics cost too much CPU for some use cases. Add two stats levels
      so that people can choose to skip two types of expensive stats, timers and
      histograms.
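
      A hedged usage sketch of the new levels (enum values as in statistics.h:
      kExceptHistogramOrTimers skips both expensive categories, while
      kExceptTimers keeps histograms but skips timers):

          Options options;
          options.statistics = CreateDBStatistics();
          options.statistics->set_stats_level(StatsLevel::kExceptHistogramOrTimers);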
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5027
      
      Differential Revision: D14252765
      
      Pulled By: siying
      
      fbshipit-source-id: 75ecec9eaa44c06118229df4f80c366115346592
  32. 22 Feb 2019 (1 commit)
  33. 15 Feb 2019 (1 commit)
    • Apply modernize-use-override (2nd iteration) · ca89ac2b
      Authored by Michael Liu
      Summary:
      Use C++11’s override and remove virtual where applicable.
      Changes are automatically generated.
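
      The transformation in miniature: C++11 override documents intent and
      turns a signature mismatch into a compile-time error.

          struct FileReader {
            virtual ~FileReader() {}
            virtual void Read();
          };
          struct PosixFileReader : FileReader {
            // Before: virtual void Read();
            void Read() override;  // after: compiler-checked
          };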
      
      Reviewed By: Orvid
      
      Differential Revision: D14090024
      
      fbshipit-source-id: 1e9432e87d2657e1ff0028e15370a85d1739ba2a
  34. 30 Jan 2019 (1 commit)