Stack backtrace on ARM cores with the GCC compiler (when there is a switch from MSP to PSP)



Core - ARM Cortex-M4

Compiler - GCC 5.3.0 ARM EABI

OS - FreeRTOS

I am using the libgcc function `_Unwind_Reason_Code _Unwind_Backtrace(_Unwind_Trace_Fn, void *);` to do a stack backtrace.

In our project the MSP stack is used for exception handling; in all other cases the PSP stack is used. When I call _Unwind_Backtrace() inside an exception handler, I can correctly trace back to the first function called inside the exception. Up to that point the active stack is the MSP.
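For context, this is roughly the standard call pattern for _Unwind_Backtrace from thread ("process") mode; all names here (trace_cb, ips, do_backtrace) are mine, not from the project, and this sketch runs on any GCC target with unwind tables, not just Cortex-M:

```cpp
#include <unwind.h>

// Collected program counters, one per stack frame.
static void *ips[32];
static int ip_count = 0;

// Callback invoked by the unwinder once per frame.
static _Unwind_Reason_Code trace_cb(struct _Unwind_Context *ctx, void *arg)
{
    (void)arg;
    if (ip_count < 32)
        ips[ip_count++] = (void *)_Unwind_GetIP(ctx);
    return _URC_NO_REASON; // keep walking up the stack
}

// noinline so that the call below really adds a frame to walk through.
__attribute__((noinline)) void do_backtrace()
{
    ip_count = 0;
    _Unwind_Backtrace(&trace_cb, nullptr);
}
```

Calling do_backtrace() from some nested function fills ips[] with one address per frame, which is exactly what stops working across the MSP/PSP switch described below.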

But we cannot backtrace past the point where the exception occurred. At that point the active stack was the PSP.

For example, suppose:

Task1
{
    func1()
}

func1
{
    func2()
}

func2
{
    an exception occurs here
}

**Inside Exception**
{
    func1ex()
}

func1ex
{
    func2ex()
}

func2ex
{
    unwind_backtrace()
}

The unwind backtrace can trace back as far as func1ex(), but not along the path task1 -> func1 -> func2.

Because there is a switch from the PSP to the MSP during the exception, it cannot backtrace through the functions that were running on the PSP.

Before control enters the exception handler, the core stacks the registers R0, R1, R2, R3, R12, LR, PC and xPSR onto the PSP. I can see this frame. But I do not know how to use this stacked frame to do the backtrace for the PSP.
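As background, the 8-word frame that the core pushes automatically on exception entry can be described by a struct like the following (a sketch with names of my choosing; the layout itself is fixed by the architecture):

```cpp
// Hypothetical struct (names are mine): the 8-word frame a Cortex-M core
// pushes onto the active stack (here the PSP) at exception entry.
struct exception_frame
{
    unsigned r0, r1, r2, r3; // caller-saved argument/scratch registers
    unsigned r12;            // intra-procedure-call scratch register
    unsigned lr;             // LR of the interrupted code
    unsigned pc;             // address at which the code was interrupted
    unsigned xpsr;           // program status register
};

static_assert(sizeof(exception_frame) == 8 * sizeof(unsigned),
              "the hardware frame is exactly 8 words");

// The interrupted code's stack pointer is the frame address plus 8 words
// (plus one extra padding word when xPSR bit 9 is set; ignored here).
inline unsigned *stack_before_exception(unsigned *frame)
{
    return frame + 8;
}
```

Recovering the interrupted PC, LR and SP from this frame is the starting point for the answer below.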

Can someone tell us what to do in this situation, so that we can backtrace all the way to the task level?

Thanks,

Ashwin.

This is doable, but it requires access to internal details of how libgcc implements the _Unwind_Backtrace function. Fortunately that code is open source, but depending on such internal details is fragile, because they may break in a future version of arm-gcc without any notice.

In general, from reading the libgcc source that performs the backtrace: it creates an in-memory virtual representation of the CPU core registers, then uses this representation to walk up the stack, simulating an exception throw. The first thing _Unwind_Backtrace does is fill in this context from the current CPU registers, then call an internal implementation function.

In most cases, creating that context manually from the stacked exception structure is sufficient to fake a backtrace that goes from handler mode up through the call stack. Here is some example code (from https://github.com/bakerstu/openmrn/blob/62683863e8621cef35e94c9dcfe5abcaf996d7a2/src/freertos_drivers/common/cpu_profile.hxx#L162):

/// This struct definition mimics the internal structures of libgcc in
/// arm-none-eabi binary. It's not portable and might break in the future.
struct core_regs
{
    unsigned r[16];
};
/// This struct definition mimics the internal structures of libgcc in
/// arm-none-eabi binary. It's not portable and might break in the future.
typedef struct
{
    unsigned demand_save_flags;
    struct core_regs core;
} phase2_vrs;
/// We store what we know about the external context at interrupt entry in this
/// structure.
phase2_vrs main_context;
/// Saved value of the lr register at the exception entry.
unsigned saved_lr;
/// Takes registers from the core state and the saved exception context and
/// fills in the structure necessary for the LIBGCC unwinder.
void fill_phase2_vrs(volatile unsigned *fault_args)
{
    main_context.demand_save_flags = 0;
    main_context.core.r[0] = fault_args[0];
    main_context.core.r[1] = fault_args[1];
    main_context.core.r[2] = fault_args[2];
    main_context.core.r[3] = fault_args[3];
    main_context.core.r[12] = fault_args[4];
    // We add +2 here because first thing libgcc does with the lr value is
    // subtract two, presuming that lr points to after a branch
    // instruction. However, exception entry's saved PC can point to the first
    // instruction of a function and we don't want to have the backtrace end up
    // showing the previous function.
    main_context.core.r[14] = fault_args[6] + 2;
    main_context.core.r[15] = fault_args[6];
    saved_lr = fault_args[5];
    main_context.core.r[13] = (unsigned)(fault_args + 8); // stack pointer
}
extern "C"
{
    _Unwind_Reason_Code __gnu_Unwind_Backtrace(
        _Unwind_Trace_Fn trace, void *trace_argument, phase2_vrs *entry_vrs);
}
/// Static variable for trace_func.
void *last_ip;
// Note: stacktrace, strace_len, MAX_STRACE and allocator are declared
// elsewhere in the linked source file.
/// Callback from the unwind backtrace function.
_Unwind_Reason_Code trace_func(struct _Unwind_Context *context, void *arg)
{
    void *ip;
    ip = (void *)_Unwind_GetIP(context);
    if (strace_len == 0)
    {
        // stacktrace[strace_len++] = ip;
        // By taking the beginning of the function for the immediate interrupt
        // we will attempt to coalesce more traces.
        // ip = (void *)_Unwind_GetRegionStart(context);
    }
    else if (last_ip == ip)
    {
        if (strace_len == 1 && saved_lr != _Unwind_GetGR(context, 14))
        {
            _Unwind_SetGR(context, 14, saved_lr);
            allocator.singleLenHack++;
            return _URC_NO_REASON;
        }
        return _URC_END_OF_STACK;
    }
    if (strace_len >= MAX_STRACE - 1)
    {
        ++allocator.limitReached;
        return _URC_END_OF_STACK;
    }
    // stacktrace[strace_len++] = ip;
    last_ip = ip;
    ip = (void *)_Unwind_GetRegionStart(context);
    stacktrace[strace_len++] = ip;
    return _URC_NO_REASON;
}
/// Called from the interrupt handler to take a CPU trace for the current
/// exception.
void take_cpu_trace()
{
    memset(stacktrace, 0, sizeof(stacktrace));
    strace_len = 0;
    last_ip = nullptr;
    phase2_vrs first_context = main_context;
    __gnu_Unwind_Backtrace(&trace_func, 0, &first_context);
    // This is a workaround for the case when the function in which we had the
    // exception trigger does not have a stack saved LR. In this case the
    // backtrace will fail after the first step. We manually append the second
    // step to have at least some idea of what's going on.
    if (strace_len == 1)
    {
        main_context.core.r[14] = saved_lr;
        main_context.core.r[15] = saved_lr;
        __gnu_Unwind_Backtrace(&trace_func, 0, &main_context);
    }
    unsigned h = hash_trace(strace_len, (unsigned *)stacktrace);
    struct trace *t = find_current_trace(h);
    if (!t)
    {
        t = add_new_trace(h);
    }
    if (t)
    {
        t->total_size += 1;
    }
}
/// Change this value to runtime disable and enable the CPU profile gathering
/// code.
bool enable_profiling = 0;
/// Helper function to declare the CPU usage tick interrupt.
/// @param irq_handler_name is the name of the interrupt to declare, for example
/// timer4a_interrupt_handler.
/// @param CLEAR_IRQ_FLAG is a c++ statement or statements in { ... } that will
/// be executed before returning from the interrupt to clear the timer IRQ flag.
#define DEFINE_CPU_PROFILE_INTERRUPT_HANDLER(irq_handler_name, CLEAR_IRQ_FLAG) \
    extern "C"                                                                 \
    {                                                                          \
        void __attribute__((__noinline__)) load_monitor_interrupt_handler(     \
            volatile unsigned *exception_args, unsigned exception_return_code) \
        {                                                                      \
            if (enable_profiling)                                              \
            {                                                                  \
                fill_phase2_vrs(exception_args);                               \
                take_cpu_trace();                                              \
            }                                                                  \
            cpuload_tick(exception_return_code & 4 ? 0 : 255);                 \
            CLEAR_IRQ_FLAG;                                                    \
        }                                                                      \
        void __attribute__((__naked__)) irq_handler_name(void)                 \
        {                                                                      \
            __asm volatile("mov  r0, %0 \n"                                    \
                           "str  r4, [r0, 4*4] \n"                             \
                           "str  r5, [r0, 5*4] \n"                             \
                           "str  r6, [r0, 6*4] \n"                             \
                           "str  r7, [r0, 7*4] \n"                             \
                           "str  r8, [r0, 8*4] \n"                             \
                           "str  r9, [r0, 9*4] \n"                             \
                           "str  r10, [r0, 10*4] \n"                           \
                           "str  r11, [r0, 11*4] \n"                           \
                           "str  r12, [r0, 12*4] \n"                           \
                           "str  r13, [r0, 13*4] \n"                           \
                           "str  r14, [r0, 14*4] \n"                           \
                           :                                                   \
                           : "r"(main_context.core.r)                          \
                           : "r0");                                            \
            __asm volatile(" tst   lr, #4               \n"                    \
                           " ite   eq                   \n"                    \
                           " mrseq r0, msp              \n"                    \
                           " mrsne r0, psp              \n"                    \
                           " mov r1, lr \n"                                    \
                           " ldr r2,  =load_monitor_interrupt_handler  \n"     \
                           " bx  r2  \n"                                       \
                           :                                                   \
                           :                                                   \
                           : "r0", "r1", "r2");                                \
        }                                                                      \
    }

This code was designed to take a CPU profile using a timer interrupt, but the backtrace unwinding can be reused from any handler, including fault handlers. Read the code from bottom to top:

  • It is important that the IRQ function is defined with the attribute __naked__; otherwise GCC's function entry prologue would manipulate the CPU state in unpredictable ways, for example by modifying the stack pointer.
  • First we save all the remaining core registers, i.e. those that are not in the exception entry structure. This has to be done in assembly, right at the beginning, because they will typically be clobbered by later C code when they are used as temporary registers.
  • Then we reconstruct the stack pointer from before the interrupt; the code works whether the processor was previously in handler mode or thread mode. This pointer is the exception entry structure. The code does not handle stacks that are not aligned to 4 bytes, but I have never seen arm-gcc do that anyway.
  • The rest of the code is in C/C++: we fill in the internal structure we took from libgcc, then call the internal implementation of the unwinding process. A few adjustments are needed to work around some assumptions of libgcc that do not hold at exception entry.
  • There is one specific situation where the unwinding does not work: when the exception happened in a leaf function that did not save LR to the stack on entry. This never happens when you try to do a backtrace from process mode, because the function performing the backtrace ensures that the calling function is not a leaf. I tried to apply some workarounds by adjusting the LR register during the backtrace, but I am not convinced it works every time. I would be interested in suggestions on how to do this better.
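The stack-pointer reconstruction in the second bullet hinges on bit 2 of the EXC_RETURN value that the core places in LR at exception entry. A small host-testable helper (my naming, not part of the answer's code) mirrors the "tst lr, #4" / "mrseq/mrsne" sequence in the assembly above:

```cpp
#include <cstdint>

// Bit 2 of EXC_RETURN selects which stack pointer the exception frame was
// pushed onto: clear means the main stack (MSP), set means the process
// stack (PSP). Returns the address of the exception frame.
uint32_t select_frame(uint32_t exc_return, uint32_t msp, uint32_t psp)
{
    return (exc_return & 4u) ? psp : msp;
}
```

For example, EXC_RETURN 0xFFFFFFF1 (return to handler mode) selects the MSP, while 0xFFFFFFFD (return to thread mode using the PSP) selects the PSP.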
