path: root/vm_backtrace.c

* Get rid of useless dsize functions (Jean Boussier, 2023-11-21; 1 file changed, -14/+11)

  If we always return 0, we might as well not define the function at all.

* Embed Thread::Backtrace::Location into Thread::Backtrace (Étienne Barrié, 2023-11-20; 1 file changed, -16/+10)

  Co-authored-by: Jean Boussier <byroot@ruby-lang.org>

* Embed Backtrace objects (Jean Boussier, 2023-11-10; 1 file changed, -11/+10)

  rb_backtrace_t is 32B, so it fits well in an 80B slot. There is some
  unused space, but given that Backtrace objects are rarely held onto, it
  should be inconsequential and avoids the malloc churn.

  Co-Authored-By: Étienne Barrié <etienne.barrie@gmail.com>

* Embed Backtrace::Location objects (Jean Boussier, 2023-11-10; 1 file changed, -4/+3)

  The struct is 16B, so these objects will use the 80B size pool. On paper
  that wastes 80 - 32 - 16 = 32B, but most malloc implementations will
  either pad sizes or use an extra 16B for each segment, so in practice
  the waste isn't that big. Also, `Backtrace::Location` objects are rarely
  held onto for long, so avoiding the malloc churn helps performance.

  Co-Authored-By: Étienne Barrié <etienne.barrie@gmail.com>

* [Feature #10602] Add new API rb_profile_thread_frames() (Daisuke Aritomo, 2023-10-31; 1 file changed, -3/+16)

  Add a new API rb_profile_thread_frames(), which is essentially a
  per-thread version of rb_profile_frames(). While the original
  rb_profile_frames() always returns results about the currently active
  thread obtained by GET_EC(), this new API takes the Thread to be
  profiled as an argument.

  This should come in handy when profiling I/O-bound programs such as
  webapps, since this new API allows us to learn about Threads performing
  I/O (which do not hold the GVL). Profiling worker threads (such as
  Sidekiq workers) may be another application.

  Implements [Feature #10602]

  Co-authored-by: Mike Perham <mike@perham.net>
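
  A sketch of how a profiler's C extension might call the new API; the
  buffer size and label printing are illustrative, and the signature
  follows the commit description (rb_profile_frames() with a leading
  Thread argument):

  ```c
  #include <stdio.h>
  #include "ruby.h"
  #include "ruby/debug.h"

  /* Sample up to 128 frames from a given thread, which need not be the
   * thread currently holding the GVL. */
  static void
  sample_thread(VALUE thread)
  {
      VALUE frames[128];
      int lines[128];
      int i, n = rb_profile_thread_frames(thread, 0, 128, frames, lines);

      for (i = 0; i < n; i++) {
          VALUE label = rb_profile_frame_full_label(frames[i]);
          if (!NIL_P(label)) printf("%s:%d\n", RSTRING_PTR(label), lines[i]);
      }
  }
  ```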

* Dump backtraces to an arbitrary stream (Nobuyoshi Nakada, 2023-09-25; 1 file changed, -9/+16)

* Return line 0 for JIT frames (Aaron Patterson, 2023-09-15; 1 file changed, -1/+13)

  Frames pushed by YJIT have an unreliable PC. The PC could be garbage,
  and if we try to read the line number with a garbage PC, the program
  can crash.

  This commit returns line 0 for frames that have a `jit_return` function
  set. If `jit_return` has been set, then the frame was pushed by the JIT
  and we cannot trust the PC.

  Here is a debugger session for a program that crashed due to a broken PC:

  ```
  (lldb) p ruby_current_vm_ptr->ractor.main_thread->ec->cfp->iseq->body->iseq_encoded
  (VALUE *) $0 = 0x0000000118a30e00
  (lldb) p/x ruby_current_vm_ptr->ractor.main_thread->ec->cfp->pc
  (const VALUE *) $1 = 0x0000600000b02d00
  (lldb) p/x ruby_current_vm_ptr->ractor.main_thread->ec->cfp->jit_return
  (void *) $2 = 0x000000010622942c
  ```

  You can see the PC is completely out of range, but there is a
  `jit_return` pointer, so we can avoid this crash.

* Don't check for null pointer in calls to free (Peter Zhu, 2023-06-30; 1 file changed, -1/+1)

  According to the C99 specification, section 7.20.3.2 paragraph 2:

  > If ptr is a null pointer, no action occurs.

  So we do not need to check the pointer for NULL before calling free.
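
  Concretely, the two forms below are equivalent, so the guard can simply
  be dropped:

  ```c
  #include <stdlib.h>

  /* Before: guard every call site against NULL. */
  static void
  dispose_checked(void *ptr)
  {
      if (ptr != NULL) free(ptr);
  }

  /* After: rely on C99 7.20.3.2p2 -- free(NULL) is a no-op,
   * so the guard is redundant. */
  static void
  dispose(void *ptr)
  {
      free(ptr);
  }
  ```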

* Suppress -Wsign-compare warning (Nobuyoshi Nakada, 2023-03-23; 1 file changed, -1/+4)

* Stop exporting symbols for MJIT (Takashi Kokubun, 2023-03-06; 1 file changed, -2/+2)

* Encapsulate RCLASS_ATTACHED_OBJECT (Jean Boussier, 2023-02-15; 1 file changed, -1/+2)

  Right now the attached object is stored as an instance variable, and all
  the call sites that either get or set it have to know how it's stored.
  It's preferable to hide this implementation detail behind accessors so
  that it is easier to change how it's stored.
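
  The pattern being described, as a hedged sketch; the names below are
  hypothetical stand-ins, not Ruby's actual internals:

  ```c
  #include "ruby.h"

  static ID id_attached_object; /* assumed interned during initialization */

  static VALUE
  attached_object_get(VALUE klass)
  {
      /* The "stored as an instance variable" detail is visible only here... */
      return rb_ivar_get(klass, id_attached_object);
  }

  static void
  attached_object_set(VALUE klass, VALUE obj)
  {
      /* ...and here, so changing the storage means editing two functions,
       * not every call site. */
      rb_ivar_set(klass, id_attached_object, obj);
  }
  ```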

* Use write barriers for Backtrace objects (John Hawthorn, 2023-02-07; 1 file changed, -5/+6)

  Backtrace objects hold references to:

  * iseqs - via the captured locations
  * strary - a lazily allocated array of strings
  * locary - a lazily allocated array of backtrace locations

  Co-authored-by: Adam Hess <HParker@github.com>
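
  "Using write barriers" here means reference stores go through
  RB_OBJ_WRITE instead of plain assignment, so the generational GC can
  track the new edge; a minimal sketch with a stand-in struct:

  ```c
  #include "ruby.h"

  /* Illustrative struct standing in for the real rb_backtrace_t. */
  struct bt_data {
      VALUE strary;  /* lazily allocated array of strings */
  };

  static void
  bt_set_strary(VALUE self, struct bt_data *bt)
  {
      /* A plain `bt->strary = rb_ary_new();` would hide this reference
       * from the generational GC; RB_OBJ_WRITE applies the write barrier
       * so the collector learns about the edge from `self`. */
      RB_OBJ_WRITE(self, &bt->strary, rb_ary_new());
  }
  ```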

* Implement Write Barrier for Backtrace::Location (Jean Boussier, 2023-02-03; 1 file changed, -2/+2)

  It only has a single reference, set in a single place.

* Return 0 if there is no CFP on the EC yet (Aaron Patterson, 2023-01-12; 1 file changed, -0/+9)

  StackProf uses a signal handler to call `rb_profile_frames`. Signals are
  delivered to threads randomly, and can be delivered after the thread has
  been created but before the CFP has been established on the EC. This
  commit returns early if there is no CFP to use.

  Here is some info from the core files we are seeing. Here you can see
  the CFP on the current EC is 0x0:

  ```
  (gdb) p ruby_current_ec
  $20 = (struct rb_execution_context_struct *) 0x7f3481301b50
  (gdb) p ruby_current_ec->cfp
  $21 = (rb_control_frame_t *) 0x0
  ```

  Here is where VM_FRAME_CFRAME_P gets a 0x0 CFP:

  ```
  6 VM_FRAME_CFRAME_P (cfp=0x0) at vm_core.h:1350
  7 VM_FRAME_RUBYFRAME_P (cfp=<optimized out>) at vm_core.h:1350
  8 rb_profile_frames (start=0, limit=2048, buff=0x7f3493809590, lines=0x7f349380d590) at vm_backtrace.c:1587
  ```

  Down the stack we can see this is happening after thread creation:

  ```
  19 0x00007f3495bf9420 in <signal handler called> () at /lib/x86_64-linux-gnu/libpthread.so.0
  20 0x000055d531574e55 in thread_start_func_2 (th=<optimized out>, stack_start=<optimized out>) at thread.c:676
  21 0x000055d531575b31 in thread_start_func_1 (th_ptr=<optimized out>) at thread_pthread.c:1170
  22 0x00007f3495bed609 in start_thread (arg=<optimized out>) at pthread_create.c:477
  23 0x00007f3495b12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
  ```
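
  The shape of the fix, sketched against a stand-in type (the real check
  is on the EC's cfp, per the gdb output above):

  ```c
  #include <stddef.h>

  /* Stand-in for the execution context; only the field this fix examines. */
  struct exec_context {
      void *cfp;  /* current control frame, NULL until the first frame is pushed */
  };

  static int
  profile_frames_sketch(struct exec_context *ec)
  {
      /* A signal can be delivered after the thread exists but before its
       * first control frame is established; bail out instead of walking
       * a NULL frame pointer. */
      if (ec->cfp == NULL) return 0;

      /* ... walk the control frames as usual ... */
      return 1;
  }
  ```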

* add debug context APIs for debuggers (frame depth) (Koichi Sasada, 2022-11-25; 1 file changed, -7/+32)

  The following new debug context APIs are for implementing a debugger's
  `next` (step over) and similar functionality.

  * `rb_debug_inspector_frame_depth(dc, index)` returns the `index`-th frame's depth.
  * `rb_debug_inspector_current_depth()` returns the current frame depth.

  The frame depth is not related to the frame index, because the debug
  context API skips some special frames, while the proposed `_depth()`
  APIs return the count of all frames (raw depth).
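
  A hedged sketch of how a debugger might use these; the open/callback
  shape follows the existing rb_debug_inspector_open API in ruby/debug.h,
  and the depth API returning a Ruby Integer (VALUE) is an assumption
  based on the message:

  ```c
  #include "ruby.h"
  #include "ruby/debug.h"

  /* Capture the current raw frame depth; comparing a later depth against
   * this value is the building block for `next` (step over). */
  static VALUE
  depth_cb(const rb_debug_inspector_t *dc, void *data)
  {
      *(VALUE *)data = rb_debug_inspector_frame_depth(dc, 0);
      return Qnil;
  }

  static VALUE
  capture_depth(void)
  {
      VALUE depth = Qnil;
      rb_debug_inspector_open(depth_cb, &depth);
      return depth;
  }
  ```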

* Fix typos (#6775) (Yudai Takada, 2022-11-20; 1 file changed, -1/+1)

  * s/Innteger/Integer/
  * s/diretory/directory/
  * s/Bufer/Buffer/
  * s/defalt/default/
  * s/covearge/coverage/

* push dummy frame for loading process (Koichi Sasada, 2022-10-20; 1 file changed, -5/+7)

  This patch pushes dummy frames when loading code, for profiling
  purposes. The following methods push a dummy frame:

  * `Kernel#require`
  * `Kernel#load`
  * `RubyVM::InstructionSequence.compile_file`
  * `RubyVM::InstructionSequence.load_from_binary`

  https://bugs.ruby-lang.org/issues/18559

* Rework vm_core to use `int first_lineno` struct member. (Samuel Williams, 2022-09-26; 1 file changed, -2/+2)

* Adjust styles [ci skip] (Nobuyoshi Nakada, 2022-08-06; 1 file changed, -2/+4)

* Fix `rb_profile_frames` output includes dummy main thread frame (Ivo Anjo, 2022-07-26; 1 file changed, -0/+3)

  The `rb_profile_frames` API did not skip the two dummy frames that each
  thread has at its beginning. This was unlike `backtrace_each` and
  `rb_ec_partial_backtrace_object`, which do skip them.

  This does not seem to be a problem for non-main-thread frames, because
  both `VM_FRAME_RUBYFRAME_P(cfp)` and `rb_vm_frame_method_entry(cfp)` are
  NULL for them. BUT, on the main thread `VM_FRAME_RUBYFRAME_P(cfp)` was
  true, and thus the dummy frame was still included in the output of
  `rb_profile_frames`.

  I've now made `rb_profile_frames` skip this extra frame (like
  `backtrace_each` and friends), and added a test that asserts the size
  and contents of `rb_profile_frames`.

  Fixes [Bug #18907] (<https://bugs.ruby-lang.org/issues/18907>)

* Expand tabs [ci skip] (Takashi Kokubun, 2022-07-21; 1 file changed, -193/+193)

  [Misc #18891]

* Write Thread instead of Threade (Kaíque Kandy Koga, 2022-05-12; 1 file changed, -1/+1)

* Add ISEQ_BODY macro (Peter Zhu, 2022-03-24; 1 file changed, -13/+13)

  Use the ISEQ_BODY macro to get the rb_iseq_constant_body of the ISeq.
  Using this macro will make it easier for us to change the allocation
  strategy of rb_iseq_constant_body when using Variable Width Allocation.
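
  The indirection the message describes, sketched with stand-in types (the
  macro body below mirrors the commit's intent; the real types are
  rb_iseq_t and rb_iseq_constant_body):

  ```c
  struct iseq_constant_body { int first_lineno; };
  struct iseq { struct iseq_constant_body *body; };

  /* Funnel every body access through one place, so the allocation
   * strategy can change without editing each call site. */
  #define ISEQ_BODY(iseq) ((iseq)->body)

  static int
  first_lineno(const struct iseq *iseq)
  {
      return ISEQ_BODY(iseq)->first_lineno;  /* was: iseq->body->first_lineno */
  }
  ```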

* Avoid unnecessary conditional (Jeremy Evans, 2022-03-09; 1 file changed, -2/+4)

  All frames should be either iseq frames or cfunc frames. Use a VM assert
  instead of a conditional to check for a cfunc frame if the current frame
  is not an iseq frame.
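
  The shape of that change, as a sketch; VM_ASSERT is Ruby's
  debug-build-only assertion macro, approximated here with assert, and the
  frame kinds are stand-ins:

  ```c
  #include <assert.h>

  /* Stand-in: Ruby's VM_ASSERT compiles away unless VM checks are enabled. */
  #define VM_ASSERT(expr) assert(expr)

  enum frame_kind { FRAME_ISEQ, FRAME_CFUNC };

  static void
  record_frame(enum frame_kind kind)
  {
      if (kind == FRAME_ISEQ) {
          /* ... handle iseq frame ... */
      }
      else {
          /* The invariant says any non-iseq frame is a cfunc frame, so
           * assert it in debug builds instead of re-testing at runtime. */
          VM_ASSERT(kind == FRAME_CFUNC);
          /* ... handle cfunc frame ... */
      }
  }
  ```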

* Fix compiler warning for uninitialized variable (Peter Zhu, 2022-02-22; 1 file changed, -1/+1)

  Fixes this compiler warning:

      warning: 'loc' may be used uninitialized in this function [-Wmaybe-uninitialized]
         bt_yield_loc(loc - cfunc_counter, cfunc_counter, btobj);

* Add Thread.each_caller_location (Jeremy Evans, 2022-02-17; 1 file changed, -5/+37)

  This method takes a block and yields Thread::Backtrace::Location objects
  to the block. It does not take arguments, and always starts at the
  default frame that caller_locations would start at.

  Implements [Feature #16663]

* [DOC] Document Thread::Backtrace.limit (zverok, 2021-12-20; 1 file changed, -1/+55)

* Make RubyVM::AbstractSyntaxTree.of raise for backtrace location in eval (Yusuke Endoh, 2021-12-19; 1 file changed, -7/+12)

  This check is needed to fix a bug of error_highlight when a NameError
  occurs in eval'ed code.
  https://github.com/ruby/error_highlight/pull/16

  The same check for proc/method has already been introduced in
  64ac984129a7a4645efe5ac57c168ef880b479b2.

* ast.c: Use kept script_lines data instead of re-opening the source file (#5019) (Yusuke Endoh, 2021-10-26; 1 file changed, -11/+11)

* Using NIL_P macro instead of `== Qnil` (S.H, 2021-10-03; 1 file changed, -3/+3)
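
  The substitution in question, sketched (NIL_P is the public C-API
  predicate for Qnil; the surrounding function is illustrative):

  ```c
  #include "ruby.h"

  static const char *
  path_or_default(VALUE path)
  {
      /* Before: if (path == Qnil) ... -- NIL_P expresses the same test. */
      if (NIL_P(path)) return "(unknown)";
      return RSTRING_PTR(path);
  }
  ```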

* Make backtrace generation work outward from current frame (Jeremy Evans, 2021-08-06; 1 file changed, -295/+181)

  This fixes multiple bugs found in the partial backtrace optimization
  added in 3b24b7914c16930bfadc89d6aff6326a51c54295. These bugs occur when
  passing a start argument to caller where the start argument lands on an
  iseq frame without a pc.

  Before this commit, the following code results in the same line being
  printed twice, both for the #each method.

  ```ruby
  def a; [1].group_by { b } end
  def b; puts(caller(2, 1).first, caller(3, 1).first) end
  a
  ```

  After this commit and in Ruby 2.7, the lines are different, with the
  first line being for each and the second for group_by.

  Before this commit, the following code can either segfault or result in
  an infinite loop:

  ```ruby
  def foo
    caller_locations(2, 1).inspect   # segfault
    caller_locations(2, 1)[0].path   # infinite loop
  end

  1.times.map { 1.times.map { foo } }
  ```

  After this commit, this code works correctly.

  This commit completely refactors the backtrace handling. Instead of
  processing the backtrace from the outermost frame working in, process it
  from the innermost frame working out. This is much faster for partial
  backtraces, since you only access the control frames you need in order
  to construct the backtrace.

  To handle cfunc frames in the new design, they start out with no
  location information. We increment a counter for each cfunc frame added.
  When an iseq frame with pc is accessed, after adding the iseq backtrace
  location, we use its location for all of the directly preceding cfunc
  backtrace locations.

  If the last backtrace line is a cfunc frame, we continue scanning for
  iseq frames until the end control frame, and use the location
  information from the first one for the trailing cfunc frames in the
  backtrace.

  As only rb_ec_partial_backtrace_object uses the new backtrace
  implementation, remove all of the function pointers and inline the
  functions. This makes the process easier to understand.

  Restore the Ruby 2.7 implementation of backtrace_each and use it for all
  the other functions that called backtrace_each other than
  rb_ec_partial_backtrace_object. All other cases requested the entire
  backtrace, so there is no advantage to using the new algorithm for
  those. Additionally, there are implicit assumptions in the other code
  that the backtrace processing works inward instead of outward.

  Remove the cfunc/iseq union in rb_backtrace_location_t, and remove the
  prev_loc member for cfunc. Both cfunc and iseq types can now have iseq
  and pc entries, so the location information can be accessed the same way
  for each. This avoids the need for an extra backtrace location entry to
  store an iseq backtrace location if the final entry in the backtrace is
  a cfunc. This is also what fixes the segfault and infinite loop issues
  in the above bugs.

  Here's Ruby pseudocode for the new algorithm, where start and length are
  the arguments to caller or caller_locations:

  ```ruby
  end_cf = VM.end_control_frame.next
  cf = VM.start_control_frame
  size = VM.num_control_frames - 2
  bt = []
  cfunc_counter = 0

  if length.nil? || length > size
    length = size
  end

  while cf != end_cf && bt.size != length
    if cf.iseq?
      if cf.instruction_pointer?
        if start > 0
          start -= 1
        else
          bt << cf.iseq_backtrace_entry
          cfunc_counter.times do |i|
            bt[-2 - i].loc = cf.loc
          end
          cfunc_counter = 0
        end
      end
    elsif cf.cfunc?
      if start > 0
        start -= 1
      else
        bt << cf.cfunc_backtrace_entry
        cfunc_counter += 1
      end
    end

    cf = cf.prev
  end

  if cfunc_counter > 0
    while cf != end_cf
      if cf.iseq? && cf.instruction_pointer?
        cfunc_counter.times do |i|
          bt[-1 - i].loc = cf.loc
        end
        break
      end
      cf = cf.prev
    end
  end
  ```

  With the following benchmark, which uses a call depth of around 100
  (common in many Ruby applications):

  ```ruby
  require 'benchmark/ips'

  class T
    def test(depth, &block)
      if depth == 0
        yield self
      else
        test(depth - 1, &block)
      end
    end

    def array
      Array.new
    end

    def first
      caller_locations(1, 1)
    end

    def full
      caller_locations
    end
  end

  t = T.new
  t.test((ARGV.first || 100).to_i) do
    Benchmark.ips do |x|
      x.report('caller_loc(1, 1)') { t.first }
      x.report('caller_loc') { t.full }
      x.report('Array.new') { t.array }
      x.compare!
    end
  end
  ```

  Results before commit:

  ```
  Calculating -------------------------------------
      caller_loc(1, 1)    281.159k (± 0.7%) i/s -   1.426M in 5.073055s
            caller_loc     15.836k (± 2.1%) i/s -  79.450k in 5.019426s
             Array.new      1.852M (± 2.5%) i/s -   9.296M in 5.022511s

  Comparison:
             Array.new:  1852297.5 i/s
      caller_loc(1, 1):   281159.1 i/s - 6.59x (± 0.00) slower
            caller_loc:    15835.9 i/s - 116.97x (± 0.00) slower
  ```

  Results after commit:

  ```
  Calculating -------------------------------------
      caller_loc(1, 1)    562.286k (± 0.8%) i/s -   2.858M in 5.083249s
            caller_loc     16.402k (± 1.0%) i/s -  83.200k in 5.072963s
             Array.new      1.853M (± 0.1%) i/s -   9.278M in 5.007523s

  Comparison:
             Array.new:  1852776.5 i/s
      caller_loc(1, 1):   562285.6 i/s - 3.30x (± 0.00) slower
            caller_loc:    16402.3 i/s - 112.96x (± 0.00) slower
  ```

  This shows that the speed of caller_locations(1, 1) has roughly doubled,
  and the speed of caller_locations with no arguments has improved
  slightly. So this new algorithm is significantly faster, much simpler,
  and fixes bugs in the previous algorithm.

  Fixes [Bug #18053]

* Using RBOOL macro (S.H, 2021-08-02; 1 file changed, -6/+1)
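
  RBOOL converts a C boolean expression into Qtrue/Qfalse; the pattern it
  replaces, as a sketch (the function name is illustrative):

  ```c
  #include "ruby.h"

  static VALUE
  same_location_p(VALUE a, VALUE b)
  {
      /* Before:
       *     if (a == b) return Qtrue;
       *     return Qfalse;
       * RBOOL collapses the C boolean into Qtrue/Qfalse directly. */
      return RBOOL(a == b);
  }
  ```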

* Enable USE_ISEQ_NODE_ID by default (Yusuke Endoh, 2021-06-18; 1 file changed, -5/+5)

  ... which was formerly called EXPERIMENTAL_ISEQ_NODE_ID.
  See also ff69ef27b06eed1ba750e7d9cab8322f351ed245.

  https://bugs.ruby-lang.org/issues/17930

* Make it possible to get AST::Node from Thread::Backtrace::Location (Yusuke Endoh, 2021-06-18; 1 file changed, -3/+66)

  RubyVM::AbstractSyntaxTree.of(Thread::Backtrace::Location) returns a
  node that corresponds to the location. Typically, the node is a method
  call, but not always.

  This change also includes iseq's dump/load support of node_ids for each
  instruction.

* Remove LOCATION_TYPE_ISEQ_CALCED state from Backtrace::Location (Yusuke Endoh, 2021-06-18; 1 file changed, -35/+14)

  Previously, Backtrace::Location had two possible states:
  LOCATION_TYPE_ISEQ and LOCATION_TYPE_ISEQ_CALCED. The former had the
  location information as a PC, and the latter had it as a lineno. Once
  the lineno was calculated, the state was changed to
  LOCATION_TYPE_ISEQ_CALCED and the calculated result was kept.

  This change removes LOCATION_TYPE_ISEQ_CALCED, so the lineno is
  calculated whenever it is needed. It will be a little slower, but a
  lineno is typically needed only when its backtrace is shown, so I
  believe it does not matter.

  This is a preparation to add column information to Backtrace::Location,
  because the PC is needed to calculate the node_id for an AST::Node even
  after the lineno is calculated. This change is approved by ko1.
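
  The trade-off, sketched with stand-in types: keep the PC and recompute
  the line on demand instead of caching it (the real lookup walks the
  iseq's position table):

  ```c
  #include <stddef.h>

  struct iseq_stub { int dummy; };  /* stand-in for the real iseq type */

  struct location {
      const struct iseq_stub *iseq;
      size_t pc;  /* kept even after lineno is known, so node_id/column stay derivable */
  };

  static int
  calc_lineno(const struct iseq_stub *iseq, size_t pc)
  {
      (void)iseq; (void)pc;
      return 0;   /* placeholder for the real PC -> line search */
  }

  /* Recompute on demand instead of caching into a *_CALCED state. */
  static int
  location_lineno(const struct location *loc)
  {
      return calc_lineno(loc->iseq, loc->pc);
  }
  ```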

* Adjust styles [ci skip] (Nobuyoshi Nakada, 2021-06-17; 1 file changed, -6/+11)

  * --braces-after-func-def-line
  * --dont-cuddle-else
  * --procnames-start-lines
  * --space-after-for
  * --space-after-if
  * --space-after-while

* Ensure that caller respects the start argument (Jeremy Evans, 2021-03-24; 1 file changed, -2/+50)

  Previously, if there were ignored frames (iseq without pc), we could go
  beyond the requested start frame. This has two changes:

  1) Ensure that we don't look beyond the start frame by using
     last_cfp = RUBY_VM_PREVIOUS_CONTROL_FRAME(last_cfp) until the desired
     start frame is reached.

  2) To fix the failures caused by change 1), which occur when a limited
     number of frames is requested, scan the VM stack before allocating
     backtrace frames, looking for ignored frames. This is complicated if
     there are ignored frames before and after the start, in which case we
     need to scan until the start frame, and then scan backwards,
     decrementing the start value until we get to the point where start
     will result in the number of requested frames.

  This fixes a Rails test failure. Jean Boussier was able to produce a
  failing test case outside of Rails.

  Co-authored-by: Jean Boussier <jean.boussier@gmail.com>

* Fix backtrace to not skip frames with iseq without pc (Jeremy Evans, 2021-02-19; 1 file changed, -7/+9)

  Previously, frames with an iseq but no pc were skipped (even before the
  refactoring in 3b24b7914c16930bfadc89d6aff6326a51c54295). Because the
  entire backtrace was processed before the refactoring, this was handled
  by using later frames instead. However, after the refactoring, we need
  to handle those frames or they get lost.

  Keep two iteration counters when iterating: one for the desired
  backtrace size (so we generate the desired number of frames), and one
  for the actual backtrace size (so we don't process off the end of the
  stack). When skipping over an iseq frame with no pc, decrement the
  counter for the desired backtrace, so it will continue to process the
  expected number of backtrace frames.

  Fixes [Bug #17581]

* Added Thread::Backtrace.limit [Feature #17479] (Nobuyoshi Nakada, 2021-02-15; 1 file changed, -0/+8)

* Ignore <internal: entries from core library methods for Kernel#warn(message, uplevel: n) (Benoit Daloze, 2020-10-26; 1 file changed, -9/+49)

  * Fixes [Bug #17259]

* Document difference between Thread::Backtrace::Location#{,absolute_}path (Jeremy Evans, 2020-09-23; 1 file changed, -2/+5)

  They are usually the same, except for locations in the main script.

* Improve performance of partial backtraces (Jeremy Evans, 2020-08-27; 1 file changed, -81/+155)

  Previously, backtrace_each fully populated the rb_backtrace_t with all
  backtrace frames, even if the caller only requested a partial backtrace
  (e.g. Kernel#caller_locations(1, 1)). This changes backtrace_each to
  only add the requested frames to the rb_backtrace_t.

  To do this, backtrace_each needs to be passed the starting frame and
  number of frames values passed to Kernel#caller or #caller_locations.

  backtrace_each works from the top of the stack to the bottom, where the
  bottom is the current frame. Due to how the location for cfuncs is
  tracked using the location of the previous iseq, we need to store an
  extra frame for the previous iseq if we are limiting the backtrace and
  the final backtrace frame (the first one stored) would be a cfunc and
  not an iseq.

  To limit the amount of work in this case, while scanning until the start
  of the requested backtrace, for each iseq, store the cfp. If the first
  backtrace frame we care about is a cfunc, use the stored cfp to find the
  related iseq. Use a function pointer to handle the storage of the cfp in
  the iteration arg, and also store the location of the extra frame in the
  iteration arg.

  backtrace_each needs to return int instead of void in order to signal
  when a starting frame larger than the backtrace size is given, as caller
  and caller_locations need to return nil and not an empty array in these
  cases.

  To handle cases where a range is provided with a negative end, where the
  backtrace size is needed to calculate the result to pass to
  rb_range_beg_len, add a backtrace_size static function to calculate the
  size, which copies the logic from backtrace_each.

  As backtrace_each only adds the backtrace lines requested,
  backtrace_to_*_ary can be simplified to always operate on the entire
  backtrace.

  Previously, caller_locations(1, 1) was about 6.2 times slower for an
  800-deep call stack compared to an empty call stack. With this new
  approach, it is only 1.3 times slower. It will always be somewhat
  slower, as it still needs to scan the cfps from the top of the stack
  until it finds the first requested backtrace frame.

  This initializes the backtrace memory to zero. I do not think this is
  necessary, as from my analysis nothing during the setting of the
  backtrace entries can cause a garbage collection, but it seems the
  safest approach, and it's unlikely the performance decrease is
  significant.

  This removes the rb_backtrace_t backtrace_base member. backtrace and
  backtrace_base were initialized to the same value, and neither is
  modified, so it doesn't make sense to have two pointers.

  This also removes LOCATION_TYPE_IFUNC from vm_backtrace.c, as the value
  is never set.

  Fixes [Bug #17031]

* Expose ec -> backtrace (internal) and use it to implement fiber backtrace. (Samuel Williams, 2020-08-18; 1 file changed, -0/+10)

  See <https://bugs.ruby-lang.org/issues/16815> for more details.
* Revert "Improve performance of partial backtraces"Jeremy Evans2020-08-121-159/+71
| | | | | | | This reverts commit f2d7461e85053cb084e10999b0b8019b0c29e66e. Some CI machines are reporting issues with backtrace_mark, so I'm going to revert this for now.

* Improve performance of partial backtraces (Jeremy Evans, 2020-08-12; 1 file changed, -71/+159)

  Previously, backtrace_each fully populated the rb_backtrace_t with all
  backtrace frames, even if the caller only requested a partial backtrace
  (e.g. Kernel#caller_locations(1, 1)). This changes backtrace_each to
  only add the requested frames to the rb_backtrace_t.

  To do this, backtrace_each needs to be passed the starting frame and
  number of frames values passed to Kernel#caller or #caller_locations.

  backtrace_each works from the top of the stack to the bottom, where the
  bottom is the current frame. Due to how the location for cfuncs is
  tracked using the location of the previous iseq, we need to store an
  extra frame for the previous iseq if we are limiting the backtrace and
  the final backtrace frame (the first one stored) would be a cfunc and
  not an iseq.

  To limit the amount of work in this case, while scanning until the start
  of the requested backtrace, for each iseq, store the cfp. If the first
  backtrace frame we care about is a cfunc, use the stored cfp to find the
  related iseq. Use a function pointer to handle the storage of the cfp in
  the iteration arg, and also store the location of the extra frame in the
  iteration arg.

  backtrace_each needs to return int instead of void in order to signal
  when a starting frame larger than the backtrace size is given, as caller
  and caller_locations need to return nil and not an empty array in these
  cases.

  To handle cases where a range is provided with a negative end, where the
  backtrace size is needed to calculate the result to pass to
  rb_range_beg_len, add a backtrace_size static function to calculate the
  size, which copies the logic from backtrace_each.

  As backtrace_each only adds the backtrace lines requested,
  backtrace_to_*_ary can be simplified to always operate on the entire
  backtrace.

  Previously, caller_locations(1, 1) was about 6.2 times slower for an
  800-deep call stack compared to an empty call stack. With this new
  approach, it is only 1.3 times slower. It will always be somewhat
  slower, as it still needs to scan the cfps from the top of the stack
  until it finds the first requested backtrace frame.

  Fixes [Bug #17031]

* vm_backtrace.c: let rb_profile_frames show cfunc frames (Yusuke Endoh, 2020-07-28; 1 file changed, -4/+63)

  ... in addition to normal iseq frames. It is sometimes useful to
  pinpoint the bottleneck more precisely.

* Add compaction support for backtrace objects (Aaron Patterson, 2020-05-07; 1 file changed, -4/+32)

  This just introduces compaction support for backtrace objects.

* Fix rb_profile_frame_classpath to handle module singletons (Jean Boussier, 2020-05-07; 1 file changed, -1/+1)

  Right now `SomeClass.method` is properly named, but `SomeModule.method`
  is displayed as `#<Module:0x000055eb5d95adc8>.method`, which makes
  profiling annoying.

* decouple internal.h headers (卜部昌平, 2019-12-26; 1 file changed, -5/+5)

  Saves committers' daily life by avoiding #include-ing everything from
  internal.h; instead each file includes only what it needs. This should
  significantly speed up incremental builds.

  We take the following inclusion order in this changeset:

  1. "ruby/config.h", where _GNU_SOURCE is defined (must be the very first
     thing among everything).
  2. RUBY_EXTCONF_H if any.
  3. Standard C headers, sorted alphabetically.
  4. Other system headers, maybe guarded by #ifdef.
  5. Everything else, sorted alphabetically.

  Exceptions are those win32-related headers, which tend not to be
  self-containing (headers have inclusion order dependencies).

* Create backtrace location array directly (Nobuyoshi Nakada, 2019-12-13; 1 file changed, -3/+3)