1. 05 May, 2020 3 commits
    • Levi Tamasi's avatar
      Expose the set of live blob files from Version/VersionSet (#6785) · a00ddf15
      Levi Tamasi authored
      The patch adds logic that returns the set of live blob files from
      `Version::AddLiveFiles` and `VersionSet::AddLiveFiles` (in addition to
      live table files), and also cleans up the code a bit, for example, by
      exposing only the numbers of table files as opposed to the earlier
      `FileDescriptor`s that no clients used. Moreover, the patch extends
      the `GetLiveFiles` API so that it also exposes blob files in the current version.
      Similarly to https://github.com/facebook/rocksdb/pull/6755,
      this is a building block for identifying and purging obsolete blob files.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6785
      Test Plan: `make check`
      Reviewed By: riversand963
      Differential Revision: D21336210
      Pulled By: ltamasi
      fbshipit-source-id: fc1aede8a49eacd03caafbc5f6f9ce43b6270821
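      The shape of the change can be illustrated with a minimal sketch; `VersionSketch` and its field names are hypothetical stand-ins, not RocksDB's actual classes. The point is that live-file enumeration now reports bare file numbers for both table and blob files:

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Hypothetical, simplified stand-ins for RocksDB's file metadata.
struct TableFileMeta { uint64_t file_number; };
struct BlobFileMeta { uint64_t blob_file_number; };

struct VersionSketch {
  std::vector<TableFileMeta> table_files;
  std::vector<BlobFileMeta> blob_files;

  // Mirrors the idea of Version::AddLiveFiles after the patch: report
  // bare file numbers (not FileDescriptors) for table files, and also
  // report the numbers of live blob files.
  void AddLiveFiles(std::set<uint64_t>* live_table_files,
                    std::set<uint64_t>* live_blob_files) const {
    for (const auto& t : table_files) {
      live_table_files->insert(t.file_number);
    }
    for (const auto& b : blob_files) {
      live_blob_files->insert(b.blob_file_number);
    }
  }
};
```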
    • sdong's avatar
      Avoid Swallowing Some File Consistency Checking Bugs (#6793) · 680c4163
      sdong authored
      We are swallowing some file consistency checking failures, which is not expected. This PR fixes two cases: DB reopen and manifest dump.
      More places are not yet fixed and will need follow-up.
      The error from CheckConsistencyForDeletes() is also swallowed; that is not fixed in this PR.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6793
      Test Plan: Add a unit test to cover the reopen case.
      Reviewed By: riversand963
      Differential Revision: D21366525
      fbshipit-source-id: eb438a322237814e8d5125f916a3c6de97f39ded
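      The nature of the fix can be sketched as follows; `Status`, `ReopenSwallowing`, and `ReopenPropagating` below are simplified hypothetical stand-ins for illustration, not the actual RocksDB code paths:

```cpp
#include <string>

// Minimal Status stand-in (RocksDB's real Status is richer).
struct Status {
  bool ok_;
  static Status OK() { return {true}; }
  static Status Corruption() { return {false}; }
  bool ok() const { return ok_; }
};

// Before: the consistency check's result was dropped on the floor.
inline Status ReopenSwallowing(bool consistent) {
  Status s = consistent ? Status::OK() : Status::Corruption();
  (void)s;  // swallowed: the caller never sees the failure
  return Status::OK();
}

// After: the check's status is propagated to the caller.
inline Status ReopenPropagating(bool consistent) {
  Status s = consistent ? Status::OK() : Status::Corruption();
  if (!s.ok()) {
    return s;  // consistency failure surfaces at DB reopen
  }
  return Status::OK();
}
```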
    • Mian Qin's avatar
      Fix issues for reproducing synthetic ZippyDB workloads in the FAST20' paper (#6795) · d9e170d8
      Mian Qin authored
      Fix issues in reproducing the synthetic ZippyDB workloads of the FAST '20 paper using db_bench. The detailed changes are as follows:
      1. Add a separate random mode in MixGraph to produce the all_random workload.
      2. Fix the power inverse function used to generate the prefix_dist workload.
      3. Make sure key_offset in prefix mode is always unsigned.
      Note: key_dist_a/b need to be chosen carefully to avoid aliasing; the range of the power inverse function should be close to the overall key space.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6795
      Reviewed By: akankshamahajan15
      Differential Revision: D21371095
      Pulled By: zhichao-cao
      fbshipit-source-id: 80744381e242392c8c7cf8ac3d68fe67fe876048
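      The power inverse fix above is an instance of inverse-transform sampling: if keys follow a CDF of the form F(x) = a * x^b, a uniform draw u maps back to a key via x = (u / a)^(1 / b). The sketch below illustrates only that math; `PowerCdf`/`PowerCdfInverse` are hypothetical names, not db_bench functions, and a/b merely mirror the key_dist_a/key_dist_b knobs mentioned in the note:

```cpp
#include <cmath>

// Forward CDF of a power-law key distribution: F(x) = a * x^b.
inline double PowerCdf(double a, double b, double x) {
  return a * std::pow(x, b);
}

// Inverse of the CDF: given u = F(x), recover x = (u / a)^(1 / b).
// As the commit notes, a and b must be chosen so this range stays
// close to the overall key space, or generated keys will alias.
inline double PowerCdfInverse(double a, double b, double u) {
  return std::pow(u / a, 1.0 / b);
}
```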
  2. 02 May, 2020 1 commit
  3. 01 May, 2020 11 commits
    • Zhichao Cao's avatar
      Fix multiple CF replay failure in db_bench replay (#6787) · c8643edf
      Zhichao Cao authored
      The multiple-CF hash map is not passed to the multi-threaded workers, so using multi-threaded replay with multiple CFs causes a segmentation fault. Pass the cf_map through the worker arguments.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6787
      Test Plan: pass trace replay test.
      Reviewed By: yhchiang
      Differential Revision: D21339941
      Pulled By: zhichao-cao
      fbshipit-source-id: 434482b492287e6722c7cd5a706f057c5ec170ce
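      The pattern behind the crash fix, sketched with hypothetical types (`ReplayerWorkerArg` and `WorkerMain` are illustrative stand-ins): the shared CF map must travel in each worker's argument rather than being assumed available globally:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <thread>

// Hypothetical per-worker argument; the real replayer's struct differs.
struct ReplayerWorkerArg {
  // Shared, read-only view of the column-family id -> name mapping.
  // Leaving this unset was the source of the segfault.
  const std::map<uint32_t, std::string>* cf_map = nullptr;
  std::string replayed_cf;
};

inline void WorkerMain(ReplayerWorkerArg* arg) {
  // Safe only because the caller wired cf_map into the argument.
  arg->replayed_cf = arg->cf_map->at(1);
}
```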
    • Yanqin Jin's avatar
      Add Github Action for some basic sanity test of PR (#6761) · 6acbbbf9
      Yanqin Jin authored
      Add a Github Action to perform some basic sanity checks for each PR, including:
      1) The Buck TARGETS file.
      On the one hand, the TARGETS file is used for internal buck builds, and we do not
      update it manually. On the other hand, we need to run the buckifier scripts to
      update TARGETS whenever new files are added, etc. This Github Action makes
      sure no PR forgets this step. The GH Action uses a Makefile target called
      check-buck-targets. Users can run `make check-buck-targets` manually on a local machine.
      2) Code format.
      We use clang-format-diff.py to format our code. The GH Action in this PR makes
      sure this step is not skipped. The checking script build_tools/format-diff.sh assumes that `clang-format-diff.py` is executable.
      On the host running the GH Action, it is difficult to download `clang-format-diff.py` and make it
      executable. Therefore, build_tools/format-diff.sh is modified to handle the case in which there is a non-executable clang-format-diff.py file in the top-level rocksdb repo directory.
      Test Plan (Github and devserver):
      Watch for Github Action result in the `Checks` tab.
      On dev server
      make check-format
      make check-buck-targets
      make check
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6761
      Test Plan: Watch for Github Action result in the `Checks` tab.
      Reviewed By: pdillinger
      Differential Revision: D21260209
      Pulled By: riversand963
      fbshipit-source-id: c646e2f37c6faf9f0614b68aa0efc818cff96787
    • sdong's avatar
      Remove the support of setting CompressionOptions.parallel_threads from string for now (#6782) · 6504ae0c
      sdong authored
      The current way of implementing CompressionOptions.parallel_threads introduces a format change. We plan to change CompressionOptions' serialization format to a new JSON-like format, which would be another format change. We would like to consolidate the two format changes into one, rather than making some users change formats twice. Withhold support for CompressionOptions.parallel_threads in the option string for now; it will be added back after the general CompressionOptions format change.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6782
      Test Plan: Run all existing tests.
      Reviewed By: zhichao-cao
      Differential Revision: D21338614
      fbshipit-source-id: bca2dac3cb37d4e6e64b52cbbe8ea749cd848685
    • Cheng Chang's avatar
      Make users explicitly be aware of prepare before commit (#6775) · ef0c3eda
      Cheng Chang authored
      In the current commit protocol of pessimistic transactions, if the transaction is not prepared before commit, the protocol implicitly assumes that the user wants to commit without prepare.
      This PR adds TransactionOptions::skip_prepare. Its default value is `true`, because if it defaulted to `false`, all existing users who commit without prepare would need to update their code to set skip_prepare to true. Although this does not force users to explicitly express their intention to skip prepare, it at least makes them aware of the assumption that committing without prepare is allowed.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6775
      Test Plan: added a new unit test TransactionTest::CommitWithoutPrepare
      Reviewed By: lth
      Differential Revision: D21313270
      Pulled By: cheng-chang
      fbshipit-source-id: 3d95b7c9b2d6cdddc09bdd66c561bc4fae8c3251
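      The new option's semantics might be sketched like this; the types below are hypothetical simplifications (the real `TransactionOptions`/`Transaction` API is far richer), intended only to show the skip_prepare contract:

```cpp
// Hypothetical sketch of the commit-protocol change: committing an
// unprepared transaction is only allowed when skip_prepare is set.
struct TransactionOptionsSketch {
  // Defaults to true so existing commit-without-prepare users keep working.
  bool skip_prepare = true;
};

struct TransactionSketch {
  TransactionOptionsSketch opts;
  bool prepared = false;

  void Prepare() { prepared = true; }

  // Returns false (an error) when the user commits without prepare
  // after explicitly opting out of skip_prepare.
  bool Commit() const {
    if (!prepared && !opts.skip_prepare) {
      return false;  // surface the missing Prepare() instead of assuming
    }
    return true;
  }
};
```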
    • sdong's avatar
      Disallow BlockBasedTableBuilder to set status from non-OK (#6776) · 079e50d2
      sdong authored
      There is no systematic mechanism to prevent BlockBasedTableBuilder's status from being set from non-OK back to OK. Adding a mechanism to enforce this will help us prevent failures in the future.
      The solution is to only make it possible to set the status when the status to set is non-OK.
      Since the status code passed to CompressAndVerifyBlock() is changed, a mini refactoring is done too, so that the output arguments are changed from references to pointers, per the Google C++ Style Guide.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6776
      Test Plan: Run all existing tests.
      Reviewed By: pdillinger
      Differential Revision: D21314382
      fbshipit-source-id: 27000c10f1e4c121661e026548d6882066409375
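      The guard can be sketched as a setter that refuses non-OK-to-OK transitions; `BuilderStatusSketch` is a hypothetical illustration, not the actual BlockBasedTableBuilder code:

```cpp
#include <string>

// Hypothetical sketch: once the builder's status is non-OK, later
// assignments can never reset it to OK.
class BuilderStatusSketch {
 public:
  bool ok() const { return ok_; }
  const std::string& message() const { return msg_; }

  // Only non-OK statuses may be stored; an OK "status" is ignored,
  // so a recorded failure can never be silently cleared.
  void SetStatus(bool ok, const std::string& msg) {
    if (ok) return;    // disallow non-OK -> OK transitions
    if (!ok_) return;  // keep the first recorded failure
    ok_ = false;
    msg_ = msg;
  }

 private:
  bool ok_ = true;
  std::string msg_;
};
```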
    • sdong's avatar
      Flag CompressionOptions::parallel_threads to be experimental (#6781) · 6277e280
      sdong authored
      The CompressionOptions::parallel_threads feature is not yet mature. Mark it as experimental in the comments for now.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6781
      Reviewed By: pdillinger
      Differential Revision: D21330678
      fbshipit-source-id: d7dd7d099fb002a5c6a5d8da689ce5ee08a9eb13
    • anand76's avatar
      Pass a timeout to FileSystem for random reads (#6751) · ab13d43e
      anand76 authored
      Calculate ```IOOptions::timeout``` using ```ReadOptions::deadline``` and pass it to ```FileSystem::Read/FileSystem::MultiRead```. This allows us to impose a tighter bound on the time taken by Get/MultiGet on FileSystem/Envs that support IO timeouts. Even on those that don't support, check in ```RandomAccessFileReader::Read``` and ```MultiRead``` and return ```Status::TimedOut()``` if the deadline is exceeded.
      For now, TableReader creation, which might do file opens and reads, is not covered; that will be implemented in another PR.
      Existing unit tests are updated to verify that the correct timeout value is being passed.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6751
      Reviewed By: riversand963
      Differential Revision: D21285631
      Pulled By: anand1976
      fbshipit-source-id: d89af843e5a91ece866e87aa29438b52a65a8567
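      The deadline-to-timeout plumbing boils down to computing a remaining time budget per IO and treating an exhausted budget as timed out. A hedged sketch with hypothetical names (`RemainingIoBudget` is not a RocksDB function):

```cpp
#include <chrono>

using Micros = std::chrono::microseconds;

// Given an absolute read deadline and the current time, return the
// remaining budget to pass as IOOptions-style timeout. A zero budget
// means the deadline was exceeded, and the caller would return a
// timed-out status instead of issuing the read.
inline Micros RemainingIoBudget(Micros deadline, Micros now) {
  if (now >= deadline) {
    return Micros(0);  // deadline exceeded: report Status::TimedOut()
  }
  return deadline - now;
}
```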
    • Peter Dillinger's avatar
      Fix assertion that can fail on sst corruption (#6780) · eecd8fba
      Peter Dillinger authored
      An assertion that a char == a CompressionType (unsigned char)
      originally cast from a char can fail if the original value is negative,
      due to numeric promotion.  The assertion should pass even if the value
      is an invalid CompressionType, because the callee
      UncompressBlockContentsForCompressionType checks for that and reports
      status appropriately.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6780
      Test Plan:
      Temporarily change kZSTD to 0x88 and see tests fail. Make this
      change in addition, and the tests pass.
      Reviewed By: siying
      Differential Revision: D21328498
      Pulled By: pdillinger
      fbshipit-source-id: 61caf8d815581ce49261ecb7ab0f396e9ac4bb92
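      The promotion pitfall can be reproduced in isolation. The sketch below uses `signed char` explicitly (plain `char` signedness is implementation-defined) and hypothetical names; the 0x88 value only mirrors the kZSTD experiment from the test plan:

```cpp
// Sketch of the bug: a raw block type byte may be negative as a signed
// char. After casting it to an unsigned-char-based enum, comparing the
// enum back against the signed char compares, after integer promotion,
// 136 with -120, so the check fails even though the round trip through
// unsigned char is value-preserving.
enum CompressionTypeSketch : unsigned char { kZSTDSketch = 0x88 };

inline bool NaiveAssertHolds(signed char raw) {
  auto type = static_cast<CompressionTypeSketch>(raw);
  // Integer promotion: type becomes 136, raw may become -120.
  return type == raw;
}

inline bool FixedAssertHolds(signed char raw) {
  auto type = static_cast<CompressionTypeSketch>(raw);
  // Compare in the unsigned domain, matching how the byte was cast.
  return static_cast<unsigned char>(raw) == type;
}
```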
    • Levi Tamasi's avatar
      Keep track of obsolete blob files in VersionSet (#6755) · fe238e54
      Levi Tamasi authored
      The patch adds logic to keep track of obsolete blob files. A blob file becomes
      obsolete when the last `shared_ptr` that points to the corresponding
      `SharedBlobFileMetaData` object goes away, which, in turn, happens when the
      last `Version` that contains the blob file is destroyed. Blob files that are no
      longer needed are added to the obsolete list in `VersionSet` using a custom deleter to
      avoid unnecessary coupling between `SharedBlobFileMetaData` and `VersionSet`.
      Obsolete blob files are returned by `VersionSet::GetObsoleteFiles` and stored
      in `JobContext`.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6755
      Test Plan: `make check`
      Reviewed By: riversand963
      Differential Revision: D21233155
      Pulled By: ltamasi
      fbshipit-source-id: 47757e06fdc0127f27ed57f51abd27893d9a7b7a
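      The custom-deleter mechanism can be sketched with `std::shared_ptr` directly; `BlobFileMetaSketch` and `MakeTracked` are hypothetical names. When the last owner releases the metadata, the deleter records the file number as obsolete, without the metadata type knowing who owns the obsolete list:

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for SharedBlobFileMetaData.
struct BlobFileMetaSketch {
  uint64_t blob_file_number;
};

// Creates shared metadata whose custom deleter appends the file number
// to the caller-owned obsolete list when the last shared_ptr goes away.
inline std::shared_ptr<BlobFileMetaSketch> MakeTracked(
    uint64_t number, std::vector<uint64_t>* obsolete_files) {
  return std::shared_ptr<BlobFileMetaSketch>(
      new BlobFileMetaSketch{number},
      [obsolete_files](BlobFileMetaSketch* meta) {
        obsolete_files->push_back(meta->blob_file_number);  // now obsolete
        delete meta;
      });
}
```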
    • Adam Retter's avatar
      Add Slack forum to README (#6773) · cf342464
      Adam Retter authored
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/6773
      Reviewed By: siying
      Differential Revision: D21310229
      Pulled By: pdillinger
      fbshipit-source-id: c0d52d0c51121d307d7d5c1374abc7bf78b0c4cf
    • Ziyue Yang's avatar
      Add an option for parallel compression to db_stress (#6722) · e619a20e
      Ziyue Yang authored
      This commit adds a `compression_parallel_threads` option to
      db_stress. It also fixes the naming of the parallel compression
      option in db_bench to keep it aligned with the others.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6722
      Reviewed By: pdillinger
      Differential Revision: D21091385
      fbshipit-source-id: c9ba8c4e5cc327ff9e6094a6dc6a15fcff70f100
  4. 30 Apr, 2020 4 commits
  5. 29 Apr, 2020 6 commits
    • Peter Dillinger's avatar
      Fix LITE build (#6770) · 8086e5e2
      Peter Dillinger authored
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/6770
      Test Plan: make LITE=1 check
      Reviewed By: ajkr
      Differential Revision: D21296261
      Pulled By: pdillinger
      fbshipit-source-id: b6075cc13a6d6db48617b7e0e9ebeea9364dfd9f
    • anand76's avatar
      Fix a valgrind failure due to DBBasicTestMultiGetDeadline (#6756) · 335ea73e
      anand76 authored
      Fix a valgrind failure.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6756
      Test Plan: valgrind_test
      Reviewed By: pdillinger
      Differential Revision: D21284660
      Pulled By: anand1976
      fbshipit-source-id: 39bf1bd130b6adb585ddbf2f9aa2f53dbf666f80
    • mrambacher's avatar
      Add Functions to OptionTypeInfo (#6422) · 618bf638
      mrambacher authored
      Added functions for parsing, serializing, and comparing elements to OptionTypeInfo. These functions allow all of the special cases that could not be handled directly in the map of OptionTypeInfo to be moved into the map. Using these functions, every type can be handled via the map rather than special-cased.
      By adding these functions, the code for handling options can become more standardized (fewer special cases) and (eventually) handled completely by common classes.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6422
      Test Plan: pass make check
      Reviewed By: siying
      Differential Revision: D21269005
      Pulled By: zhichao-cao
      fbshipit-source-id: 9ba71c721a38ebf9ee88259d60bd81b3282b9077
    • Peter Dillinger's avatar
      Clarifying comments in db.h (#6768) · b810e62b
      Peter Dillinger authored
      And fix a confusingly worded log message
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6768
      Reviewed By: anand1976
      Differential Revision: D21284527
      Pulled By: pdillinger
      fbshipit-source-id: f03c1422c229a901c3a65e524740452349626164
    • Peter Dillinger's avatar
      Basic MultiGet support for partitioned filters (#6757) · bae6f586
      Peter Dillinger authored
      In MultiGet, access each applicable filter partition only once
      per batch, rather than for each applicable key. Also,
      * Fix Bloom stats for MultiGet
      * Fix/refactor MultiGetContext::Range::KeysLeft, including
      * Add efficient BitsSetToOne implementation
      * Assert that MultiGetContext::Range does not go beyond shift range
      Performance test: Generate db:
          $ ./db_bench --benchmarks=fillrandom --num=15000000 --cache_index_and_filter_blocks -bloom_bits=10 -partition_index_and_filters=true
      Before (middle performing run of three; note some missing Bloom stats):
          $ ./db_bench --use-existing-db --benchmarks=multireadrandom --num=15000000 --cache_index_and_filter_blocks --bloom_bits=10 --threads=16 --cache_size=20000000 -partition_index_and_filters -batch_size=32 -multiread_batched -statistics --duration=20 2>&1 | egrep 'micros/op|block.cache.filter.hit|bloom.filter.(full|use)|number.multiget'
          multireadrandom :      26.403 micros/op 597517 ops/sec; (548427 of 671968 found)
          rocksdb.block.cache.filter.hit COUNT : 83443275
          rocksdb.bloom.filter.useful COUNT : 0
          rocksdb.bloom.filter.full.positive COUNT : 0
          rocksdb.bloom.filter.full.true.positive COUNT : 7931450
          rocksdb.number.multiget.get COUNT : 385984
          rocksdb.number.multiget.keys.read COUNT : 12351488
          rocksdb.number.multiget.bytes.read COUNT : 793145000
          rocksdb.number.multiget.keys.found COUNT : 7931450
      After (middle performing run of three):
          $ ./db_bench_new --use-existing-db --benchmarks=multireadrandom --num=15000000 --cache_index_and_filter_blocks --bloom_bits=10 --threads=16 --cache_size=20000000 -partition_index_and_filters -batch_size=32 -multiread_batched -statistics --duration=20 2>&1 | egrep 'micros/op|block.cache.filter.hit|bloom.filter.(full|use)|number.multiget'
          multireadrandom :      21.024 micros/op 752963 ops/sec; (705188 of 863968 found)
          rocksdb.block.cache.filter.hit COUNT : 49856682
          rocksdb.bloom.filter.useful COUNT : 45684579
          rocksdb.bloom.filter.full.positive COUNT : 10395458
          rocksdb.bloom.filter.full.true.positive COUNT : 9908456
          rocksdb.number.multiget.get COUNT : 481984
          rocksdb.number.multiget.keys.read COUNT : 15423488
          rocksdb.number.multiget.bytes.read COUNT : 990845600
          rocksdb.number.multiget.keys.found COUNT : 9908456
      So that's about 25% higher throughput, even for random keys.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6757
      Test Plan: unit test included
      Reviewed By: anand1976
      Differential Revision: D21243256
      Pulled By: pdillinger
      fbshipit-source-id: 5644a1468d9e8c8575be02f4e04bc5d62dbbb57f
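      An efficient `BitsSetToOne` is essentially a population count. Below is a portable SWAR sketch for 64-bit words; RocksDB's actual implementation may rely on compiler intrinsics where available, so treat this as an illustration of the technique rather than the shipped code:

```cpp
#include <cstdint>

// Portable SWAR popcount for a 64-bit word: count pairs, then nibbles,
// then sum all bytes via one multiply. Illustrates the kind of
// efficient BitsSetToOne used by MultiGetContext::Range::KeysLeft.
inline int BitsSetToOneSketch(uint64_t v) {
  v = v - ((v >> 1) & 0x5555555555555555ULL);                  // 2-bit sums
  v = (v & 0x3333333333333333ULL) + ((v >> 2) & 0x3333333333333333ULL);
  v = (v + (v >> 4)) & 0x0F0F0F0F0F0F0F0FULL;                  // byte sums
  return static_cast<int>((v * 0x0101010101010101ULL) >> 56);  // total
}
```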
    • Peter Dillinger's avatar
      HISTORY.md update for bzip upgrade (#6767) · a7f0b27b
      Peter Dillinger authored
      See https://github.com/facebook/rocksdb/issues/6714 and https://github.com/facebook/rocksdb/issues/6703
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6767
      Reviewed By: riversand963
      Differential Revision: D21283307
      Pulled By: pdillinger
      fbshipit-source-id: 8463bec725669d13846c728ad4b5bde43f9a84f8
  6. 28 Apr, 2020 8 commits
    • Peter Dillinger's avatar
      Update HISTORY.md for block cache redundant adds (#6764) · 4574d751
      Peter Dillinger authored
      Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/6764
      Reviewed By: ltamasi
      Differential Revision: D21267108
      Pulled By: pdillinger
      fbshipit-source-id: a3dfe2dbe4e8f6309a53eb72903ef58d52308f97
    • Yanqin Jin's avatar
      Fix timestamp support for MultiGet (#6748) · d4398e08
      Yanqin Jin authored
      1. Avoid nullptr dereference when passing timestamp to KeyContext creation.
      2. Construct LookupKey correctly with timestamp when creating MultiGetContext.
      3. Compare without timestamp when sorting KeyContexts.
      Fixes https://github.com/facebook/rocksdb/issues/6745
      Test plan (dev server):
      make check
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6748
      Reviewed By: pdillinger
      Differential Revision: D21258691
      Pulled By: riversand963
      fbshipit-source-id: 44e65b759c18b9986947783edf03be4f890bb004
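      Fix 3 above (compare without timestamp when sorting KeyContexts) can be sketched as follows, assuming a fixed-size timestamp suffix on user keys; every name here is a hypothetical stand-in for the real comparator logic:

```cpp
#include <cstddef>
#include <string>

// Assumed fixed timestamp width appended to each user key (hypothetical).
constexpr size_t kTsSizeSketch = 8;

// Drop the trailing timestamp so only the user key part remains.
inline std::string StripTimestampSketch(const std::string& key_with_ts) {
  return key_with_ts.substr(0, key_with_ts.size() - kTsSizeSketch);
}

// Ordering used when sorting keys for MultiGet: two keys that differ
// only in timestamp must compare equal, not less/greater.
inline bool KeyLessWithoutTimestamp(const std::string& a,
                                    const std::string& b) {
  return StripTimestampSketch(a) < StripTimestampSketch(b);
}
```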
    • Cheng Chang's avatar
      Fix build under LITE (#6758) · 4cd859ed
      Cheng Chang authored
      GetSupportedCompressions needs to be defined under LITE.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6758
      Test Plan: build under LITE
      Reviewed By: zhichao-cao
      Differential Revision: D21247937
      Pulled By: cheng-chang
      fbshipit-source-id: 880e59d3e107cdd736d16427a68c5641d1318fb4
    • Levi Tamasi's avatar
      Destroy any ColumnFamilyHandles in BlobDB::Open upon error (#6763) · bea91d5d
      Levi Tamasi authored
      If an error happens during BlobDBImpl::Open after the base DB has been
      opened, we need to destroy the `ColumnFamilyHandle`s returned by `DB::Open`
      to prevent an assertion in `ColumnFamilySet`'s destructor from being hit.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6763
      Test Plan: Ran `make check` and tested using the BlobDB mode of `db_bench`.
      Reviewed By: riversand963
      Differential Revision: D21262643
      Pulled By: ltamasi
      fbshipit-source-id: 60ebc7ab19be66cf37fbe5f6d8957d58470f3d3b
    • Albert Hse-Lin Chen's avatar
      Fixed minor typo in comment for MergeOperator::FullMergeV2() (#6759) · cc8d16ef
      Albert Hse-Lin Chen authored
      Fixed minor typo in comment for FullMergeV2().
      Last operand up to snapshot should be +4 instead of +3.
      Signed-off-by: Albert Hse-Lin Chen <hselin@kalista.io>
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6759
      Reviewed By: cheng-chang
      Differential Revision: D21260295
      Pulled By: zhichao-cao
      fbshipit-source-id: cc942306f246c8606538feb30bfdf6df9fb6c54e
    • Peter Dillinger's avatar
      Stats for redundant insertions into block cache (#6681) · 249eff0f
      Peter Dillinger authored
      Since read threads do not coordinate on loading data into block
      cache, two threads between Lookup and Insert can end up loading and
      inserting the same data. This is particularly concerning with
      cache_index_and_filter_blocks since those are hot and more likely to
      be race targets if ejected from (or not pre-populated in) the cache.
      Particularly with moves toward disaggregated / network storage, the cost
      of redundant retrieval might be high, and we should at least have some
      hard statistics from which we can estimate impact.
      Example with full filter thrashing "cliff":
          $ ./db_bench --benchmarks=fillrandom --num=15000000 --cache_index_and_filter_blocks -bloom_bits=10
          $ ./db_bench --db=/tmp/rocksdbtest-172704/dbbench --use_existing_db --benchmarks=readrandom,stats --num=200000 --cache_index_and_filter_blocks --cache_size=$((130 * 1024 * 1024)) --bloom_bits=10 --threads=16 -statistics 2>&1 | egrep '^rocksdb.block.cache.(.*add|.*redundant)' | grep -v compress | sort
          rocksdb.block.cache.add COUNT : 14181
          rocksdb.block.cache.add.failures COUNT : 0
          rocksdb.block.cache.add.redundant COUNT : 476
          rocksdb.block.cache.data.add COUNT : 12749
          rocksdb.block.cache.data.add.redundant COUNT : 18
          rocksdb.block.cache.filter.add COUNT : 1003
          rocksdb.block.cache.filter.add.redundant COUNT : 217
          rocksdb.block.cache.index.add COUNT : 429
          rocksdb.block.cache.index.add.redundant COUNT : 241
          $ ./db_bench --db=/tmp/rocksdbtest-172704/dbbench --use_existing_db --benchmarks=readrandom,stats --num=200000 --cache_index_and_filter_blocks --cache_size=$((120 * 1024 * 1024)) --bloom_bits=10 --threads=16 -statistics 2>&1 | egrep '^rocksdb.block.cache.(.*add|.*redundant)' | grep -v compress | sort
          rocksdb.block.cache.add COUNT : 1182223
          rocksdb.block.cache.add.failures COUNT : 0
          rocksdb.block.cache.add.redundant COUNT : 302728
          rocksdb.block.cache.data.add COUNT : 31425
          rocksdb.block.cache.data.add.redundant COUNT : 12
          rocksdb.block.cache.filter.add COUNT : 795455
          rocksdb.block.cache.filter.add.redundant COUNT : 130238
          rocksdb.block.cache.index.add COUNT : 355343
          rocksdb.block.cache.index.add.redundant COUNT : 172478
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6681
      Test Plan: Some manual testing (above) and unit test covering key metrics is included
      Reviewed By: ltamasi
      Differential Revision: D21134113
      Pulled By: pdillinger
      fbshipit-source-id: c11497b5f00f4ffdfe919823904e52d0a1a91d87
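      The redundant-add accounting amounts to: an insert whose key is already present in the cache counts toward a `.redundant` ticker. A hypothetical single-cache sketch (`RedundancyCounterSketch` is illustrative only, not the block cache implementation):

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_set>

// Counts inserts and, separately, inserts of already-cached keys,
// mirroring the idea behind block.cache.*.add.redundant statistics.
class RedundancyCounterSketch {
 public:
  void Insert(uint64_t block_key) {
    std::lock_guard<std::mutex> lock(mu_);
    ++adds_;
    // insert().second is false when the key was already present:
    // a second reader raced to load the same block.
    if (!cached_.insert(block_key).second) {
      ++redundant_adds_;
    }
  }
  uint64_t adds() const { return adds_; }
  uint64_t redundant_adds() const { return redundant_adds_; }

 private:
  std::mutex mu_;
  std::unordered_set<uint64_t> cached_;
  uint64_t adds_ = 0;
  uint64_t redundant_adds_ = 0;
};
```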
    • Akanksha Mahajan's avatar
      Allow sst_dump to check size of different compression levels and report time (#6634) · 75b13ea9
      Akanksha Mahajan authored
      Summary:
      1. Add two arguments, --compression_level_from and --compression_level_to, to check
         the compressed size with different compression levels in the given range. Users must
         specify one compression type, else it will error out. Both the from and to levels
         must also be specified together.
      2. Display the time taken to compress each file with each compression level by default.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6634
      Test Plan: make -j64 check
      Reviewed By: anand1976
      Differential Revision: D20810282
      Pulled By: akankshamahajan15
      fbshipit-source-id: ac9098d3c079a1fad098f6678dbedb4d888a791b
    • Peter Dillinger's avatar
      Understand common build variables passed as make variables (#6740) · 791e5714
      Peter Dillinger authored
      Some common build variables like USE_CLANG and
      COMPILE_WITH_UBSAN did not work if specified as make variables, as in
      `make USE_CLANG=1 check` etc. rather than (in theory less hygienic)
      `USE_CLANG=1 make check`. This patches Makefile to export some commonly
      used ones to build_detect_platform so that they work. (I'm skeptical of
      a broad `export` in Makefile because it's hard to predict how random
      make variables might affect various invoked tools.)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6740
      Test Plan: manual / CI
      Reviewed By: siying
      Differential Revision: D21229011
      Pulled By: pdillinger
      fbshipit-source-id: b00c69b23eb2a13105bc8d860ce2d1e61ac5a355
  7. 27 Apr, 2020 1 commit
    • Yanqin Jin's avatar
      Update buckifier to unblock future internal release (#6726) · 3b2f2719
      Yanqin Jin authored
      Some recent PRs added new source files or modified the TARGETS file manually.
      During the next internal release, executing the following command would revert those
      manual changes. Update the buckifier so that running
      python buckifier/buckify_rocksdb.py
      does not change the TARGETS file.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6726
      Test Plan:
      python buckifier/buckify_rocksdb.py
      Reviewed By: siying
      Differential Revision: D21098930
      Pulled By: riversand963
      fbshipit-source-id: e884f507fefef88163363c9097a460c98f1ed850
  8. 25 Apr, 2020 4 commits
    • Cheng Chang's avatar
      Disable O_DIRECT in stress test when db directory does not support direct IO (#6727) · 0a776178
      Cheng Chang authored
      In the crash test, the db directory might be set to /dev/shm or /tmp. In certain environments, such as internal testing infrastructure, neither of these directories supports direct IO, so direct IO is never enabled in the crash test.
      This PR sets up SyncPoints in direct IO related code paths to disable O_DIRECT flag in calls to `open`, so the direct IO code paths will be executed, all direct IO related assertions will be checked, but no real direct IO request will be issued to the file system.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6727
      Test Plan:
      export CRASH_TEST_EXT_ARGS="--use_direct_reads=1 --mmap_read=0"
      make -j24 crash_test
      Reviewed By: zhichao-cao
      Differential Revision: D21139250
      Pulled By: cheng-chang
      fbshipit-source-id: db9adfe78d91aa4759835b1af91c5db7b27b62ee
    • Cheng Chang's avatar
      Reduce memory copies when fetching and uncompressing blocks from SST files (#6689) · 40497a87
      Cheng Chang authored
      In https://github.com/facebook/rocksdb/pull/6455, we modified the interface of `RandomAccessFileReader::Read` to be able to get rid of memcpy in direct IO mode.
      This PR applies the new interface to `BlockFetcher` when reading blocks from SST files in direct IO mode.
      Without this PR, in direct IO mode, when fetching and uncompressing compressed blocks, `BlockFetcher` will first copy the raw compressed block into `BlockFetcher::compressed_buf_` or `BlockFetcher::stack_buf_` inside `RandomAccessFileReader::Read`, depending on the block size. Then, during uncompression, it will copy the uncompressed block into `BlockFetcher::heap_buf_`.
      In this PR, we get rid of the first memcpy and directly uncompress the block from `direct_io_buf_` to `heap_buf_`.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6689
      Test Plan: A new unit test `block_fetcher_test` is added.
      Reviewed By: anand1976
      Differential Revision: D21006729
      Pulled By: cheng-chang
      fbshipit-source-id: 2370b92c24075692423b81277415feb2aed5d980
    • Cheng Chang's avatar
      Fix unused variable of r in release mode (#6750) · 1758f76f
      Cheng Chang authored
      In release mode, asserts are not compiled, so `r` is not used, causing compiler warnings.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6750
      Test Plan: make check under release mode
      Reviewed By: anand1976
      Differential Revision: D21220365
      Pulled By: cheng-chang
      fbshipit-source-id: fd4afa9843d54af68c4da8660ec61549803e1167
    • anand76's avatar
      Silence false alarms in db_stress fault injection (#6741) · 9e7b7e2c
      anand76 authored
      False alarms are caused by codepaths that intentionally swallow IO errors.
      Test Plan: make crash_test
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6741
      Reviewed By: ltamasi
      Differential Revision: D21181138
      Pulled By: anand1976
      fbshipit-source-id: 5ccfbc68eb192033488de6269e59c00f2c65ce00
  9. 24 Apr, 2020 2 commits