
Showing posts from March, 2020

Why is C/C++ still alive after 40 years?

Short brief: C is a general-purpose programming language developed at Bell Labs by Dennis Ritchie. C++ was born in 1979, created by Bjarne Stroustrup as an extension of C with classes. Since 1972 both languages have seen a series of standard releases; the most recent are C18 for C and C++20 for C++. Who would have thought they would remain important programming languages for about four decades? Perhaps no one. Some reasons: nowadays almost all programmers learn C or C++ as a fundamental skill, and there are several reasons why these languages are still alive. I want to show some of them here. Compilers: all the big corporate compilers, from Microsoft, Intel and IBM, as well as GCC and LLVM Clang, support C and C++. This support lets communities and programmers develop their products on these languages with confidence, which is why they remain important even as newer programming languages bring us new facilities. Speed & footprint: according to s…

Using a reversible debugger to recover from stack corruption

If a program overwrites its own program counter register, recovery with a conventional debugger is almost impossible: without the program counter, the debugger cannot figure out which function the program was running, so it cannot give any useful information about what is on the stack or where the code was immediately before the stack was corrupted. This makes debugging pretty much impossible. With a reversible debugger, however, recovery is almost comically simple: just do reverse-step to rewind one instruction, and the state of the program moves back to the instruction that corrupted the program counter. You can see what went wrong, and the debugger now knows which function was running, so it can interpret the stack and display it in a useful way. You can replay your code, find the issue, and then debug and fix it quickly. For example, in this program the function foo overwrites its stack with zeros; a sketch follows below.
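A minimal sketch of this failure mode; the function name foo is from the post, but the buffer size and the memset overrun are my assumptions:

    #include <string.h>

    void foo(void) {
        char buf[16];
        // Assumed bug: zeroing far past buf also wipes out foo's saved
        // return address on the stack.
        memset(buf, 0, 1024);
    }

    int main(void) {
        foo();   // the return from foo loads a zeroed PC -> SIGSEGV
        return 0;
    }

Built with gcc -g -fno-stack-protector (so the overrun reaches the return address instead of tripping the stack protector) and recorded under a reversible debugger such as rr (rr record ./a.out, then rr replay), the SIGSEGV arrives with the program counter at 0; a single reverse-stepi rewinds to foo's return instruction, and stepping back further lands in the memset that did the damage.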

Getting a backtrace during stack corruption

As we know, if a program crashes in a way that corrupts the stack, GDB cannot give a backtrace of the crash, and even a core dump does not help. Consider the stack-corruption situation below. Is it possible to make out anything useful from this for debugging? No.

    Program received signal SIGSEGV, Segmentation fault.
    0x00000002 in ?? ()
    (gdb) bt
    #0  0x00000002 in ?? ()
    #1  0x00000001 in ?? ()
    #2  0xbffff284 in ?? ()
    Backtrace stopped: previous frame inner to this frame (corrupt stack?)
    (gdb)

Those bogus addresses (0x00000002 and the like) are actually PC values, not SP values. When you get this kind of SEGV, with a bogus (very small) PC address, 99% of the time it's due to calling through a bogus function pointer. Note that virtual calls in C++ are implemented via function pointers, so any problem with a virtual call can manifest in the same way. An indirect call instruction just pushes the PC after the call onto the stack and then sets the PC to the (bogus) target address, so when the fault hits, the word on top of the stack is the return address of the offending call.
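A minimal sketch that reproduces exactly this signature; the 0x2 constant is my stand-in for whatever garbage ended up in the pointer:

    typedef void (*fn_t)(void);

    int main(void) {
        fn_t f = (fn_t)0x2;   // bogus pointer, e.g. a trashed vtable slot
        f();                  // pushes the return address, then jumps to 0x2
        return 0;
    }

Because the faulting call pushed a valid return address, you can often recover the caller by hand: at the fault, x/a $sp prints the word on top of the stack, and info symbol on that address tells you which function it lies in; both are stock GDB commands.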

Generate a backtrace automatically when a program crashes due to SIGSEGV

For Linux and, I believe, Mac OS X, if you're using gcc or any compiler that uses glibc, you can use the backtrace() functions in execinfo.h to print a stack trace and exit gracefully when you get a segmentation fault. Documentation can be found in the libc manual. Here's an example program that installs a SIGSEGV handler and prints a stack trace to stderr when it segfaults. The baz() function here causes the segfault that triggers the handler:

    #include <stdio.h>
    #include <execinfo.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    void handler(int sig) {
      void *array[10];
      size_t size;

      // get void*'s for all entries on the stack
      size = backtrace(array, 10);

      // print out all the frames to stderr
      fprintf(stderr, "Error: signal %d:\n", sig);
      backtrace_symbols_fd(array, size, STDERR_FILENO);
      exit(1);
    }

    void baz() {
      int *foo = (int*)-1;   // make a bad pointer
      printf("%d\n", *foo);  // dereferencing it segfaults, invoking handler()
    }

    int main(void) {
      signal(SIGSEGV, handler);  // install the handler
      baz();                     // trigger the crash
      return 0;
    }
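A practical note, speaking of glibc in general rather than anything specific to this post: backtrace_symbols_fd() can only print function names for symbols that are exported, so the glibc manual suggests linking with -rdynamic (the file name below is just a placeholder):

    gcc -g -rdynamic crash.c -o crash   # -rdynamic exports symbols so frames get names
    ./crash                             # the handler prints "Error: signal 11:" and the frames

Without -rdynamic you usually get raw addresses only, which can still be resolved by hand with addr2line -e crash followed by the address.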

Debugging Memory leaks in Large applications and libraries in Linux

Memory leaks are a disaster in embedded Linux applications. If a leak happens in func1 in file1.c, the application may crash in func100 in file10.c, which makes issues caused by memory leaks very hard to detect and fix. If your application is small, leaks can often be found by code review; with hundreds of files and thousands of functions, the best tool is valgrind. Compile your source with the -g option to enable debugging symbols and run the application through valgrind, e.g. valgrind --leak-check=full --num-callers=100 <myprog>. Check the report after a complete run, go through the invalid reads/writes and the leak records, and correct them. That's it! This can save a huge amount of time otherwise spent on source-code walkthroughs.
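A minimal leaky program to try this on; the file and function names are mine, not from the post:

    #include <stdlib.h>

    static void leaky(void) {
        void *p = malloc(64);   // allocated but never freed
        (void)p;
    }

    int main(void) {
        for (int i = 0; i < 10; i++)
            leaky();            // leaks 640 bytes in total
        return 0;
    }

Compile and run it through valgrind:

    gcc -g leak.c -o leak
    valgrind --leak-check=full --num-callers=100 ./leak

Because the binary was built with -g, the "definitely lost" records in the report point at the exact malloc call site in leaky(); --num-callers=100 keeps the recorded allocation stacks deep enough for large call chains (the default is only 12 frames).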

Analyzing Crash dumps in Linux using GDB

GDB is a common tool for debugging applications in a Linux development environment. Before generating the core dump, compile the application with the -g option to enable debugging symbols.

1. Run ulimit -c unlimited in the shell to enable core dumps.
2. Run the application; when it crashes, it generates a core dump named core.<pid>.
3. Load the application in gdb, e.g. gdb <path to application>.
4. Load the core dump with the core-file <path to core dump> command.
5. Use bt to get the backtrace of the crash, which shows the exact location where the app crashed.

Use GDB this way instead of scattering log statements and it will save you a lot of debugging time! The whole flow is sketched below.
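A shell-session sketch of the steps above; the program name and PID are placeholders:

    ulimit -c unlimited          # allow core files of any size
    gcc -g myprog.c -o myprog    # -g keeps the debugging symbols
    ./myprog                     # crashes and writes core.<pid>
    gdb ./myprog                 # load the binary
    (gdb) core-file core.12345   # load the core dump (PID is illustrative)
    (gdb) bt                     # file, function and line of the crash

On distributions that route crashes through systemd-coredump, no core.<pid> file appears in the working directory; coredumpctl gdb retrieves the most recent dump and opens it in GDB directly.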