  1. Oct 07, 2021
    • Frederic Cambus
      [CMake] Fix typo in error message for LLD in bootstrap builds. · f0ffff43
      Frederic Cambus authored
      Reviewed By: xgupta
      
      Differential Revision: https://reviews.llvm.org/D110836
      f0ffff43
    • Pengxuan Zheng
      [ARM] Fix a bug in finding a pair of extracts to create VMOVRRD · b0045f55
      Pengxuan Zheng authored
      D100244 missed a check on the ResNo of the extract's operand 0 when finding a
      pair of extracts to combine into a VMOVRRD (extract(x, n); extract(x, n+1) ->
      VMOVRRD(extract x, n/2)). As a result, it can incorrectly pair an extract(x, n)
      with another extract(x:3, n+1) for example. This patch fixes the bug by adding
      the proper check on ResNo.
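
      A simplified sketch of the added guard (not the actual patch; Ext0 and
      Ext1 are hypothetical names for the two extract nodes being considered):

      ```
      #include "llvm/CodeGen/SelectionDAGNodes.h"
      using namespace llvm;

      // The two EXTRACT_VECTOR_ELTs may only be paired into a VMOVRRD when
      // they read the same value, i.e. the same node *and* the same result
      // number. Comparing SDValues checks both, so extract(x, n) and
      // extract(x:3, n+1) are rejected because their operand 0 differs in
      // ResNo even though the node is the same.
      static bool extractsReadSameVector(SDValue Ext0, SDValue Ext1) {
        return Ext0.getOperand(0) == Ext1.getOperand(0);
      }
      ```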
      
      Reviewed By: dmgreen
      
      Differential Revision: https://reviews.llvm.org/D111188
      b0045f55
    • Arthur Eubanks
      [IR] Increase max alignment to 4GB · df84c1fe
      Arthur Eubanks authored
      Currently the max alignment representable is 1GB, see D108661.
      Setting the align of an object to 4GB is desirable in some cases to make sure the lower 32 bits are clear which can be used for some optimizations, e.g. https://crbug.com/1016945.
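
      As a rough illustration of the optimization referred to above (the
      helper names below are hypothetical, not from Chromium or LLVM): with a
      4GB-aligned object, the low 32 bits of its address are zero and can
      carry extra data.

      ```
      #include <cassert>
      #include <cstdint>

      // Pack a 32-bit tag into the (known-clear) low bits of a 2^32-aligned
      // pointer on a 64-bit host.
      uintptr_t packTag(void *FourGBAlignedPtr, uint32_t Tag) {
        auto P = reinterpret_cast<uintptr_t>(FourGBAlignedPtr);
        assert((P & 0xFFFFFFFFu) == 0 && "expected 2^32-byte alignment");
        return P | Tag;
      }

      void *unpackPtr(uintptr_t Packed) {
        return reinterpret_cast<void *>(Packed & ~uintptr_t(0xFFFFFFFFu));
      }
      ```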
      
      This uses an extra bit in instructions that carry an alignment. We can store 15 bits of "free" information, and with this change some instructions (e.g. AtomicCmpXchgInst) use 14 bits.
      We can increase the max alignment representable above 4GB (up to 2^62) since we're only using 33 of the 64 values, but I've just limited it to 4GB for now.
      
      The one place we have to update the bitcode format is for the alloca instruction. It stores its alignment into 5 bits of a 32-bit bitfield. I've added another field which is 8 bits and should be future-proof for a while. For backward compatibility, we check if the old field has a value and use that; otherwise we use the new field.
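
      Rough arithmetic behind those field widths, assuming the usual
      log2-plus-one encoding of alignments (with 0 meaning "not specified");
      this is a sanity sketch, not the bitcode reader/writer code:

      ```
      #include <cassert>
      #include <cstdint>

      int main() {
        // An N-bit field stores encoded values 0 .. 2^N - 1; with the +1
        // bias, the largest representable alignment is 2^(2^N - 2) bytes.
        auto maxLog2AlignForBits = [](unsigned Bits) {
          return (uint64_t(1) << Bits) - 2;
        };
        assert(maxLog2AlignForBits(5) == 30);  // 2^30 bytes = 1GB, old limit
        assert(maxLog2AlignForBits(8) == 254); // well past the 32 for 4GB
      }
      ```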
      
      Updating clang's max allowed alignment will come in a future patch.
      
      Reviewed By: hans
      
      Differential Revision: https://reviews.llvm.org/D110451
      df84c1fe
    • Arthur Eubanks
      [clang] Allow printing 64 bit ints in diagnostics · afdac5fb
      Arthur Eubanks authored
      Currently we're limited to 32-bit ints in diagnostics.
      With support for 4GB alignments coming soon, we need to report 4GB as the max alignment allowed.
      I've tested that this does indeed properly print 2^32.
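
      For illustration (plain C++, not clang's diagnostics code): 2^32 does
      not fit in a 32-bit argument, so it would be truncated to 0 before this
      change.

      ```
      #include <cstdint>
      #include <iostream>

      int main() {
        uint64_t MaxAlign = uint64_t(1) << 32;        // 4GB, i.e. 2^32
        std::cout << static_cast<uint32_t>(MaxAlign)  // truncated: prints 0
                  << '\n' << MaxAlign << '\n';        // prints 4294967296
      }
      ```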
      
      Reviewed By: rsmith
      
      Differential Revision: https://reviews.llvm.org/D111184
      afdac5fb
    • Max Kazantsev
      [Test] Add LoopPeel test for loops with profile data available · 00eec5c1
      Max Kazantsev authored
      Patch by Dmitry Makogon!
      00eec5c1
    • Gabor Marton
      [analyzer][NFC] Add RangeSet::dump · b8f6c85a
      Gabor Marton authored
      This tiny change improves the debugging experience of the solver a lot!
      
      Differential Revision: https://reviews.llvm.org/D110911
      b8f6c85a
    • Geoffrey Martin-Noble
      [MLIR] Improve debug messages in BuiltinTypes · b096ac90
      Geoffrey Martin-Noble authored
      It's nice for users to have more information when debugging failures, and
      these messages are only triggered on the failure path.
      
      Reviewed By: mehdi_amini
      
      Differential Revision: https://reviews.llvm.org/D107676
      b096ac90
    • Alexandre Rames
      [MLIR] Rename Shape dialect's `join` to `meet`. · fd961332
      Alexandre Rames authored
      For the type lattice, we (now) use the "less specialized or equal" partial
      order, leading to the bottom representing the empty set, and the top
      representing any type.
      
      This naming is more in line with the generally used conventions, where the top
      of the lattice is the full set, and the bottom of the lattice is the empty set.
      A typical example is the powerset of a finite set: generally, meet would be the
      intersection, and join would be the union.
      
      ```
      top:  {a,b,c}
           /   |   \
       {a,b} {a,c} {b,c}
         |  X     X  |
         {a} { b } {c}
            \  |  /
      bottom: { }
      ```
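
      For concreteness, a small standalone C++ illustration of this powerset
      lattice (not MLIR code): meet is set intersection, join is set union.

      ```
      #include <algorithm>
      #include <iterator>
      #include <set>

      using Set = std::set<char>;

      // Greatest lower bound on the powerset lattice.
      Set meet(const Set &A, const Set &B) {
        Set R;
        std::set_intersection(A.begin(), A.end(), B.begin(), B.end(),
                              std::inserter(R, R.begin()));
        return R;
      }

      // Least upper bound on the powerset lattice.
      Set join(const Set &A, const Set &B) {
        Set R;
        std::set_union(A.begin(), A.end(), B.begin(), B.end(),
                       std::inserter(R, R.begin()));
        return R;
      }
      // meet({a,b}, {b,c}) == {b};  join({a}, {c}) == {a,c}.
      ```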
      
      This is in line with the examined lattice representations in LLVM:
      * lattice for `BitTracker::BitValue` in `Hexagon/BitTracker.h`
      * lattice for constant propagation in `HexagonConstPropagation.cpp`
      * lattice in `VarLocBasedImpl.cpp`
      * lattice for address space inference code in `InferAddressSpaces.cpp`
      
      Reviewed By: silvas, jpienaar
      
      Differential Revision: https://reviews.llvm.org/D110766
      fd961332
    • Nikita Popov
      [BasicAA] Don't unnecessarily extend pointer size · 1301a8b4
      Nikita Popov authored
      BasicAA GEP decomposition currently performs all calculation on the
      maximum pointer size, but at least 64-bit, with an option to double
      the size. The code comment claims that this improves analysis power
      when working with uint64_t indices on 32-bit systems. However, I don't
      see how this can be, at least while maintaining correctness:
      
      When working on canonical code, the GEP indices will have GEP index
      size. If the original code worked on uint64_t with a 32-bit size_t,
      then there will be truncs inserted before use as a GEP index. Linear
      expression decomposition does not look through truncs, so this will
      be an opaque value as far as GEP decomposition is concerned. Working
      on a wider pointer size does not help here (or have any effect at all).
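
      For illustration, the C pattern described above (the IR in the comment
      is only the rough shape a frontend would emit on a 32-bit target):

      ```
      #include <cstdint>

      // With 32-bit pointers, the 64-bit index cannot be used directly;
      // canonical IR contains something like
      //   %idx = trunc i64 %i to i32
      //   %gep = getelementptr inbounds i32, i32* %p, i32 %idx
      // and the linear expression decomposition stops at the trunc.
      int load_at(int *p, uint64_t i) {
        return p[i];
      }
      ```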
      
      When working on non-canonical code (before first InstCombine), the
      GEP indices are implicitly truncated to GEP index size. The BasicAA
      code currently just ignores this fact completely, and pretends that
      this truncation doesn't happen. This is incorrect and will be
      addressed by D110977.
      
      I believe that for correctness reasons, it is important to work on
      the actual GEP index size to properly model potential overflow.
      BasicAA tries to patch over the fact that it uses the wrong size
      (see adjustToPointerSize), but it only does that in limited cases
      (only for constant values, and not all of them either). I'd like to
      move this code towards always working on the correct size, and
      dropping these artificial pointer size adjustments is the first step
      towards that.
      
      Differential Revision: https://reviews.llvm.org/D110657
      1301a8b4
    • Sanjay Patel
      [InstSimplify] (x | y) & (x | !y) --> x · e36d351d
      Sanjay Patel authored
      https://alive2.llvm.org/ce/z/QagQMn
      
      This fold is handled by instcombine via SimplifyUsingDistributiveLaws(),
      but we are missing the sibling fold for 'logical and' (implemented with
      'select'). Retrofitting the code in instcombine looks much harder
      than just adding a small adjustment here, and this is potentially more
      efficient and beneficial to other passes.
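
      As a quick sanity check of the underlying identity (an exhaustive test
      over 8-bit values, not the InstSimplify code):

      ```
      #include <cassert>
      #include <cstdint>

      int main() {
        // (x | y) & (x | ~y) == x: where x has a 1, both sides are 1; where
        // x has a 0, exactly one of y / ~y contributes a 0 to the AND.
        for (uint32_t x = 0; x < 256; ++x)
          for (uint32_t y = 0; y < 256; ++y)
            assert(((x | y) & (x | (~y & 0xFFu)) & 0xFFu) == x);
      }
      ```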
      e36d351d
    • Gabor Marton
      [analyzer][solver] Fix CmpOpTable handling bug · 792be5df
      Gabor Marton authored
      There is an error in the implementation of the logic for reaching the `Unknown` tristate in CmpOpTable.
      
      ```
      void cmp_op_table_unknownX2(int x, int y, int z) {
        if (x >= y) {
                          // x >= y    [1, 1]
          if (x + z < y)
            return;
                          // x + z < y [0, 0]
          if (z != 0)
            return;
                          // x < y     [0, 0]
          clang_analyzer_eval(x > y);  // expected-warning{{TRUE}} expected-warning{{FALSE}}
        }
      }
      ```
      We miss the `FALSE` warning because the false branch is infeasible.
      
      We have to exploit simplification to discover the bug. If we had `x < y`
      as the second condition then the analyzer would return the parent state
      on the false path and the new constraint would not be part of the State.
      But adding `z` to the condition makes both paths feasible.
      
      The root cause of the bug is that we reach the `Unknown` tristate
      twice, but on both occasions we reach the same `Op`, which is `>=` in
      the test case. So, we reached `>=` twice, but we never reached `!=`,
      thus querying the `UnknownX2` column with `getCmpOpStateForUnknownX2`
      is wrong.
      
      The solution is to ensure that we have reached both **different** `Op`s once each.
      
      Differential Revision: https://reviews.llvm.org/D110910
      792be5df
    • Michael Forster
      Revert "[lldb] Remove "dwarf dynamic register size expressions" from RegisterInfo" · b2c906da
      Michael Forster authored
      This reverts commit 00e704bf.
      
      This commit should have updated
      llvm/llvm-project/lldb/source/Plugins/ABI/ARC/ABISysV_arc.cpp like the other
      architectures.
      b2c906da