path: root/rjit_c.rb
Commit message (Author, Date, Files, Lines -deleted/+added)
* show warning for unused block (Koichi Sasada, 2024-04-15, 1 file, -0/+1)

  With verbose mode (-w), the interpreter shows a warning if a block is
  passed to a method which does not use the given block.

  Warning on:

  * the invoked method is written in C
  * the invoked method is not `initialize`
  * not invoked with `super`
  * the first time on the call-site with the invoked method
    (`obj.foo{}` will be warned once if `foo` is the same method)

  [Feature #15554]

  `Primitive.attr! :use_block` is introduced to declare that primitive
  functions (written in C) will use the passed block.

  For minitest, the tests need some tweaks, so use
  https://github.com/minitest/minitest/commit/ea9caafc0754b1d6236a490d59e624b53209734a
  for `test-bundled-gems`.
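  As a hedged illustration of the rules above (the method chosen and the
  exact warning text are assumptions, not part of the commit):

  ```ruby
  # Run with: ruby -w unused_block.rb
  # Integer#to_s is written in C and never uses a block, so per the
  # conditions above this call should warn, once per call site.
  1.to_s { :never_called }
  # => warning: the block passed to 'Integer#to_s' may be ignored
  #    (exact wording may differ between Ruby versions)
  ```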
* Move FL_SINGLETON to FL_USER1 (Jean Boussier, 2024-03-06, 1 file, -1/+4)

  This frees FL_USER0 on both T_MODULE and T_CLASS.

  Note: prior to this, FL_SINGLETON was never set on T_MODULE, so
  checking for `FL_SINGLETON` without first checking that `FL_TYPE` was
  `T_CLASS` was valid. That's no longer the case.

* Update a stubbed type for RJIT (Takashi Kokubun, 2024-03-01, 1 file, -4/+4)

  cfunc.func is actually used by RJIT.

* Update bindgen for YJIT and RJIT (Takashi Kokubun, 2024-03-01, 1 file, -1/+5)

* [PRISM] Provide runtime flag for prism in iseq (Kevin Newton, 2024-02-21, 1 file, -4/+5)

* Bump the required BASERUBY version to 3.0 (#9976) (Takashi Kokubun, 2024-02-15, 1 file, -63/+16)

* Introduce Allocationless Anonymous Splat Forwarding (Jeremy Evans, 2024-01-24, 1 file, -0/+2)
  Ruby makes it easy to delegate all arguments from one method to
  another:

  ```ruby
  def f(*args, **kw)
    g(*args, **kw)
  end
  ```

  Unfortunately, this indirection decreases performance. One reason it
  decreases performance is that this allocates an array and a hash per
  call to `f`, even if `args` and `kw` are not modified.

  Due to Ruby's ability to modify almost anything at runtime, it's
  difficult to avoid the array allocation in the general case. For
  example, it's not safe to avoid the allocation in a case like this:

  ```ruby
  def f(*args, **kw)
    foo(bar)
    g(*args, **kw)
  end
  ```

  Because `foo` may be `eval` and `bar` may be a string referencing
  `args` or `kw`. To fix this correctly, you need to perform something
  similar to escape analysis on the variables.

  However, there is a case where you can avoid the allocation without
  doing escape analysis, and that is when the splat variables are
  anonymous:

  ```ruby
  def f(*, **)
    g(*, **)
  end
  ```

  When splat variables are anonymous, it is not possible to reference
  them directly; it is only possible to use them as splats to other
  methods. Since that is the case, if `f` is called with a regular splat
  and a keyword splat, it can pass the arguments directly to `g` without
  copying them, avoiding allocation. For example:

  ```ruby
  def g(a, b:)
    a + b
  end

  def f(*, **)
    g(*, **)
  end

  a = [1]
  kw = {b: 2}

  f(*a, **kw)
  ```

  I call this technique: Allocationless Anonymous Splat Forwarding.

  This is implemented using a couple of additional iseq param flags,
  anon_rest and anon_kwrest. If anon_rest is set, and an array splat is
  passed when calling the method, and the array splat can be used
  without modification, `setup_parameters_complex` does not duplicate
  it. Similarly, if anon_kwrest is set, and a keyword splat is passed
  when calling the method, `setup_parameters_complex` does not duplicate
  it.
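  A hedged way to observe the effect (allocation counts are illustrative
  assumptions; requires a Ruby that includes this change):

  ```ruby
  def g(*, **); end

  def f_named(*args, **kw)
    g(*args, **kw)          # allocates an args Array + kw Hash per call
  end

  def f_anon(*, **)
    g(*, **)                # forwards without copying (this commit)
  end

  a, kw = [1, 2], {b: 3}

  count = ->(&blk) {
    before = GC.stat(:total_allocated_objects)
    10_000.times(&blk)
    GC.stat(:total_allocated_objects) - before
  }

  p named: count.call { f_named(*a, **kw) },
    anon:  count.call { f_anon(*a, **kw) }
  # On a Ruby with this change, :anon should report far fewer allocations.
  ```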
* Leave a comment about the limitation of Primitive (Takashi Kokubun, 2024-01-23, 1 file, -4/+8)

  and adjust some code styling from that PR.

* `cexpr!` must be up to one per line now (Nobuyoshi Nakada, 2024-01-22, 1 file, -2/+4)

* RJIT: Properly reject keyword splat with `yield` (Alan Wu, 2024-01-18, 1 file, -0/+1)

  See the fix for YJIT.
* Drop obsoleted BUILTIN_ATTR_NO_GC attribute (Takashi Kokubun, 2024-01-16, 1 file, -1/+0)

  The thing that used this in the past was very buggy, and we've never
  revisited it. Let's remove it until we need it again.
* Do not `poll` first (Koichi Sasada, 2024-01-05, 1 file, -0/+1)

  Before this patch, the MN scheduler waits for the IO with the
  following steps:

  1. `poll(fd, timeout=0)` to check whether the fd is ready or not.
  2. If the fd is not ready, wait with the MN thread scheduler.
  3. Call `func` to issue the blocking I/O call.

  The advantage of polling in advance is that we can wait for IO
  readiness on any fd. However, the `poll()` becomes pure overhead for
  fds that are already ready. This patch changes the steps like:

  1. Call `func` to issue the blocking I/O call.
  2. If `func` returns `EWOULDBLOCK`, the fd is `O_NONBLOCK` and we need
     to wait for the fd to become ready, so wait with the MN thread
     scheduler.

  In this case, we wait only for `O_NONBLOCK` fds. Otherwise, it waits
  with blocking operations such as the `read()` system call, and we
  don't need to call `poll()` in advance to check whether the fd is
  ready.

  With this patch we can observe a performance improvement on a
  microbenchmark which repeats blocking I/O (not an `O_NONBLOCK` fd)
  with and without the MN thread scheduler.

  ```ruby
  require 'benchmark'

  f = open('/dev/null', 'w')
  f.sync = true

  TN = 1
  N = 1_000_000 / TN

  Benchmark.bm{|x|
    x.report{
      TN.times.map{
        Thread.new{
          N.times{f.print '.'}
        }
      }.each(&:join)
    }
  }
  __END__
  TN = 1
                  user     system      total        real
  ruby32      0.393966   0.101122   0.495088 (  0.495235)
  ruby33      0.493963   0.089521   0.583484 (  0.584091)
  ruby33+MN   0.639333   0.200843   0.840176 (  0.840291) <- Slow
  this+MN     0.512231   0.099091   0.611322 (  0.611074) <- Good
  ```
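  The same "try first, wait only on EWOULDBLOCK" order can be sketched
  at the user level with plain Ruby IO APIs (an analogy, not the
  scheduler internals):

  ```ruby
  require "io/wait"

  # Write everything, attempting the call first and waiting only when
  # the fd reports it would block.
  def write_all(io, data)
    until data.empty?
      case n = io.write_nonblock(data, exception: false)
      when :wait_writable
        io.wait_writable            # block (or yield to the scheduler)
      else
        data = data.byteslice(n..)  # drop the bytes already written
      end
    end
  end
  ```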
* RJIT: Distinguish Pointer with Array (Takashi Kokubun, 2023-12-22, 1 file, -4/+4)

  This is more convenient for accessing those fields.

* RJIT: Update bindgen (Takashi Kokubun, 2023-12-21, 1 file, -2/+3)
* RJIT: Rename pause/resume to disable/enable (Takashi Kokubun, 2023-12-21, 1 file, -1/+1)

  like YJIT. They don't work in the same way yet, but it's nice to make
  the naming consistent first so that we will not need to rename them
  later.
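  A hedged usage sketch, assuming the renamed methods are exposed on
  `RubyVM::RJIT` the way YJIT exposes `RubyVM::YJIT.enable`:

  ```ruby
  # Enable RJIT at runtime if this build ships it (assumption: the
  # post-rename API responds to :enable, mirroring YJIT's interface).
  if defined?(RubyVM::RJIT) && RubyVM::RJIT.respond_to?(:enable)
    RubyVM::RJIT.enable  # formerly RubyVM::RJIT.resume
  end
  ```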
* RJIT: Share rb_vm_insns_count for vm_insns_count (Takashi Kokubun, 2023-12-18, 1 file, -1/+4)
* Thread specific storage APIs (Koichi Sasada, 2023-12-08, 1 file, -0/+1)

  This patch introduces thread-specific storage APIs for tools which use
  the `rb_internal_thread_event_hook` APIs.

  * `rb_internal_thread_specific_key_create()` creates a tool-specific
    thread-local storage key and allocates the storage if not available.
  * `rb_internal_thread_specific_set()` stores data in the thread- and
    tool-specific storage.
  * `rb_internal_thread_specific_get()` retrieves data from the thread-
    and tool-specific storage.

  Note that `rb_internal_thread_specific_get|set(thread_val, key)` can
  be called without the GVL, is async-signal-safe, and is safe for
  multi-threading (native threads), so you can call it in any internal
  thread event hook. Furthermore, you can call it from other native
  threads. Of course, `thread_val` should be alive while accessing the
  data through this function.

  Note that you should not forget to clean up the stored data.
* Revert "Revert "Remove SHAPE_CAPACITY_CHANGE shapes""Peter Zhu2023-11-131-1/+0
| | | | This reverts commit 5f3fb4f4e397735783743fe52a7899b614bece20.
* Revert "Remove SHAPE_CAPACITY_CHANGE shapes"Peter Zhu2023-11-101-0/+1
| | | | | | | This reverts commit f6910a61122931e4193bcc0fad18d839c319b720. We're seeing crashes in the test suite of Shopify's core monolith after this change.
* Remove SHAPE_CAPACITY_CHANGE shapesPeter Zhu2023-11-091-1/+0
| | | | | We don't need to create a shape to transition capacity as we can transition the capacity when the capacity of the SHAPE_IVAR changes.
* Refactor rb_shape_transition_shape_capa out (Jean Boussier, 2023-11-08, 1 file, -6/+0)

  Right now, callers of `rb_shape_get_next` need to first check whether
  there is capacity left, and if not, call
  `rb_shape_transition_shape_capa` before they can call
  `rb_shape_get_next`. On each of these calls they also need to check
  whether they got TOO_COMPLEX back.

  All this logic is duplicated in the interpreter, YJIT and RJIT.

  Instead, we can have `rb_shape_get_next` do the capacity transition
  when needed. The caller can compare the old and new shapes' capacities
  to know whether resizing is needed, and it has to check for
  TOO_COMPLEX only once.
* Make every initial size pool shape a root shape (Peter Zhu, 2023-11-02, 1 file, -1/+0)

  This commit makes every initial size pool shape a root shape and
  assigns it a capacity of 0.

* Use a functional red-black tree for indexing the shapes (Aaron Patterson, 2023-10-24, 1 file, -0/+5)
  This is an experimental commit that uses a functional red-black tree
  to create an index of the ancestor shapes. It uses an Okasaki-style
  functional red-black tree:

  https://www.cs.tufts.edu/comp/150FP/archive/chris-okasaki/redblack99.pdf

  This tree is advantageous because:

  * It offers O(log n) insertions and O(log n) lookups.
  * It shares memory with previous "versions" of the tree.

  When we insert a node in the tree, only the parts of the tree that
  need to be rebalanced are newly allocated. Parts of the tree that
  don't need to be rebalanced are not reallocated, so "new trees" are
  able to share memory with old trees. This is in contrast to a sorted
  set, where we would have to duplicate the set and also re-sort it on
  each insertion.

  I've added a new stat to RubyVM.stat so we can understand how the
  red-black tree increases.
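  A minimal Ruby sketch of the Okasaki-style persistent insert the paper
  above describes, purely to illustrate the structural sharing (the real
  shape index is implemented in C inside the VM; all names here are
  illustrative):

  ```ruby
  Node = Struct.new(:color, :left, :key, :right)  # color: :r or :b

  # Okasaki's balance: rewrite the four red-red violation shapes into
  # one red node with two black children.
  def balance(color, l, k, r)
    if color == :b
      if l&.color == :r && l.left&.color == :r
        ll = l.left
        return Node.new(:r, Node.new(:b, ll.left, ll.key, ll.right),
                        l.key, Node.new(:b, l.right, k, r))
      elsif l&.color == :r && l.right&.color == :r
        lr = l.right
        return Node.new(:r, Node.new(:b, l.left, l.key, lr.left),
                        lr.key, Node.new(:b, lr.right, k, r))
      elsif r&.color == :r && r.left&.color == :r
        rl = r.left
        return Node.new(:r, Node.new(:b, l, k, rl.left),
                        rl.key, Node.new(:b, rl.right, r.key, r.right))
      elsif r&.color == :r && r.right&.color == :r
        rr = r.right
        return Node.new(:r, Node.new(:b, l, k, r.left),
                        r.key, Node.new(:b, rr.left, rr.key, rr.right))
      end
    end
    Node.new(color, l, k, r)
  end

  # Insert copies only the path that gets rebalanced; untouched
  # subtrees are shared between the old and new trees.
  def ins(t, key)
    return Node.new(:r, nil, key, nil) if t.nil?
    if    key < t.key then balance(t.color, ins(t.left, key), t.key, t.right)
    elsif key > t.key then balance(t.color, t.left, t.key, ins(t.right, key))
    else  t
    end
  end

  def insert(t, key)
    root = ins(t, key)
    Node.new(:b, root.left, root.key, root.right)  # root stays black
  end

  v1 = [5, 2, 8].reduce(nil) { |t, k| insert(t, k) }
  v2 = insert(v1, 9)
  p v2.left.left.equal?(v1.left)  # => true: the node holding 2 is shared
  ```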
* Revert "shape.h: Make attr_index_t uint8_t"Katherine Oelsner2023-10-181-2/+2
| | | | This reverts commit e3afc212ec059525fe4e5387b2a3be920ffe0f0e.
* shape.h: Make attr_index_t uint8_t (Jean Boussier, 2023-10-11, 1 file, -2/+2)

  Given `SHAPE_MAX_NUM_IVS 80`, we transition to TOO_COMPLEX way before
  we could overflow an 8-bit counter.

  This reduces the size of `rb_shape_t` from 32B to 24B.

  If we decide to raise `SHAPE_MAX_NUM_IVS` we can always increase that
  type again.
* Refactor rb_shape_transition_shape_capa to not accept capacity (Jean Boussier, 2023-10-10, 1 file, -2/+2)

  This way the growth factor is encapsulated, which allows
  rb_shape_transition_shape_capa to be smarter about ideal sizes.
* Use reference counting to avoid memory leak in kwargs (HParker, 2023-10-01, 1 file, -0/+1)

  Tracks other callinfo that references the same kwargs and frees them
  when all references are cleared.

  [Bug #19906]

  Co-authored-by: Peter Zhu <peter@peterzhu.ca>
* [Bug #19896] (Adam Hess, 2023-09-22, 1 file, -3/+3)

  Fix memory leak in vm_method.

  This introduces a unified reference_count to clarify who is
  referencing a method. This also allows us to treat the refinement
  method as the def owner, since it counts itself as a reference.

  Co-authored-by: Peter Zhu <peter@peterzhu.ca>
* YJIT: Compile exception handlers (#8171) (Takashi Kokubun, 2023-08-08, 1 file, -2/+2)

  Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
* Remove __bp__ and speed-up bmethod calls (#8060) (Alan Wu, 2023-07-17, 1 file, -1/+0)

  Remove rb_control_frame_t::__bp__ and optimize bmethod calls.

  This commit removes the __bp__ field from rb_control_frame_t. It was
  introduced to help MJIT, but since MJIT was replaced by RJIT, we can
  use vm_base_ptr() to compute it from the SP of the previous control
  frame instead. Removing the field avoids needing to set it up when
  pushing new frames.

  Simply removing __bp__ would cause crashes since RJIT and YJIT used a
  slightly different stack layout for bmethod calls than the
  interpreter. At the moment of the call, the two layouts looked as
  follows:

  ```
  ┌────────────┐    ┌────────────┐
  │ frame_base │    │ frame_base │
  ├────────────┤    ├────────────┤
  │    ...     │    │    ...     │
  ├────────────┤    ├────────────┤
  │    args    │    │    args    │
  ├────────────┤    └────────────┘<─prev_frame_sp
  │  receiver  │
  └────────────┘<─prev_frame_sp
   RJIT & YJIT       interpreter
  ```

  Essentially, vm_base_ptr() needs to compute the address of frame_base
  given prev_frame_sp in the diagrams. The presence of the receiver
  created an off-by-one situation.

  Make the interpreter use the layout the JITs use for iseq-to-iseq
  bmethod calls. Doing so removes unnecessary argument shifting and
  vm_exec_core() re-entry from the interpreter, yielding a speed
  improvement visible through `benchmark/vm_defined_method.yml`:

      patched: 7578743.1 i/s
       master: 4796596.3 i/s - 1.58x slower

  C-to-iseq bmethod calls now store one more VALUE than before, but that
  should have negligible impact on overall performance.

  Note that re-entering vm_exec_core() used to be necessary for firing
  TracePoint events, but that's no longer the case since
  9121e57a5f50bc91bae48b3b91edb283bf96cb6b.

  Closes ruby/ruby#6428
* Expose rb_hash_resurrect (Aaron Patterson, 2023-06-23, 1 file, -0/+4)

  This is for implementing the `duphash` instruction.

* Unify length field for embedded and heap strings (#7908) (Peter Zhu, 2023-06-06, 1 file, -2/+1)

  * Unify length field for embedded and heap strings

    The length field is of the same type and position in RString for
    both embedded and heap allocated strings, so we can unify it.

  * Remove RSTRING_EMBED_LEN

* Update RJIT to support newarray_send (Aaron Patterson, 2023-04-18, 1 file, -0/+8)

  This also adds max / hash support.
* Move `catch_except_p` to `compile_data` (eileencodes, 2023-04-11, 1 file, -5/+4)

  The `catch_except_p` flag is used for communicating between parent and
  child iseqs that a throw instruction was emitted. For example, if a
  child iseq has a throw in it and the parent wants to catch the throw,
  we use this flag to communicate to the parent iseq that a throw
  instruction was emitted.

  This flag is only useful at compile time; it only impacts the
  compilation process, so it seems fine to move it from the iseq body to
  the compile_data struct.

  Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
* Expose rb_sym_to_proc via RJIT (Aaron Patterson, 2023-04-07, 1 file, -0/+4)

  This is needed for getblockparamproxy.

* [Feature #19579] Remove !USE_RVARGC code (#7655) (Peter Zhu, 2023-04-04, 1 file, -0/+5)

  Remove !USE_RVARGC code

  [Feature #19579]

  The Variable Width Allocation feature was turned on by default in Ruby
  3.2. Since then, we haven't received bug reports or backports to the
  non-Variable Width Allocation code paths, so we assume that nobody is
  using it. We also don't plan on maintaining the non-Variable Width
  Allocation code, so we are going to remove it.
* RJIT: Add --rjit-verify-ctx option (Takashi Kokubun, 2023-04-04, 1 file, -0/+1)

* RJIT: Store type information in Context (Takashi Kokubun, 2023-04-02, 1 file, -0/+8)

* RJIT: Support entry with different PCs (Takashi Kokubun, 2023-04-02, 1 file, -8/+8)

* RJIT: Support has_opt ISEQs (Takashi Kokubun, 2023-04-02, 1 file, -0/+2)

* RJIT: Simplify cfunc implementation (Takashi Kokubun, 2023-04-02, 1 file, -0/+14)

* RJIT: Simplify invokesuper implementation (Takashi Kokubun, 2023-04-02, 1 file, -0/+2)

* RJIT: Group blockarg exit reasons (Takashi Kokubun, 2023-04-02, 1 file, -4/+1)

* RJIT: Support splat args (Takashi Kokubun, 2023-04-02, 1 file, -1/+2)

* RJIT: Update exit reasons (Takashi Kokubun, 2023-04-02, 1 file, -0/+4)

* Remove an unneeded function copy (Takashi Kokubun, 2023-04-01, 1 file, -4/+4)

* RJIT: Support rest args (Takashi Kokubun, 2023-04-01, 1 file, -0/+12)

* RJIT: Fix has_rest exit conditions (Takashi Kokubun, 2023-04-01, 1 file, -1/+1)

* RJIT: Remove unused counters (Takashi Kokubun, 2023-04-01, 1 file, -12/+3)

* RJIT: Start moving away from VM-like ISEQ handling (Takashi Kokubun, 2023-04-01, 1 file, -4/+29)