Autotools:

We saw the make utility for building (compiling and generating executables for) large programs. However, we usually write a Makefile to compile programs for the system on which we are running make, i.e. if I am working on a Linux OS, the executable I generate is for that Linux OS. That same executable will not work on Windows, and may not even work on other flavors of Linux. This is because the same compiler version, lib files, etc may not exist on all Linux systems. So, we need a different Makefile tailored for each platform, which leads to too many Makefiles, each unique to one system.

To remedy this situation, we need a Makefile which has #ifdef-style conditionals for different platforms, and generates the executable differently for each platform (i.e. if an older version of gcc exists on some system, it may not support some flag. In that case, a conditional helps resolve such issues and still lets the Makefile build an executable). Having all these conditionals in the Makefile, and making sure that such a pgm works on all architectures, is a very tedious task. So, most software that is intended to be distributed for multiple platforms uses a tool called "Autotools". This tool automates the task of generating binaries for different platforms.

Most Linux programs that we download come either as a binary executable or as source code. For programs whose binary executable is directly available, we do not need to do anything: we just run the executable and the pgm starts running. However, for programs which come as source code, we usually need to run 3 steps as an end user to generate the executable. This is called the "GNU Build System". The installer on your system unpacks the downloaded package (if tar.gz file, then use gunzip and tar, if .deb pkg, then use dpkg or apt, etc) and then runs these 3 steps:

  1. ./configure => analyzes your system to see what kind of programs and libraries you have, so it knows how to build the program best
  2. make => actual building is done using Makefile generated from above step (same way as we do using make with a Makefile)
  3. sudo make install => installs the pgm (puts pgm libs,binary,etc in appr dir with appr permissions). By default, it's put in /usr/local/ (bin in /usr/local/bin, lib in /usr/local/lib, etc)

These 3 steps are needed because the Makefile gets generated differently for different platforms. ./configure generates a Makefile, then make runs on this Makefile to generate the executable, and then "make install" puts the generated executable in the appropriate dir. We may never need to write a program that we distribute to other people, so you may wonder why learn Autotools. The reason is that most of the time we end up using such pgms, which requires us to run these steps. Having a brief understanding of Autotools helps us when building (compiling and installing) 3rd party pgms on our system. Learning Autotools is a full time job in itself, so I'll just highlight a few basic cmds with an example.

Full detailed tutorial for autotools here: https://www.lrde.epita.fr/~adl/autotools.html

Brief tutorial on this is: http://markuskimius.wikidot.com/programming:tut:autotools

Autotools is a collection of three tools:

  • Autoconf — This is used to generate the “configure” shell script. As I mentioned earlier, this is the script that analyzes your system at compile-time. For example, does your system use “cc” or “gcc” as the C compiler? Full Autoconf doc here: https://www.gnu.org/software/autoconf/manual/autoconf.html
  • Automake — This is used to generate Makefiles. It uses information provided by Autoconf. For example, if your system has “gcc”, it will use “gcc” in the Makefile. Or, if it finds “cc” instead, will use “cc” in the Makefile. Full automake docs here: https://www.gnu.org/software/automake/manual/automake.html
  • Libtool — This is used to create shared libraries in a platform-independent way. No need to know this, as it's a complicated topic for advanced users.

The GNU build process driven by Autotools has some standard conventions. Good to know these:

A. make options:

1. Std Makefile targets: make all, make install, make uninstall, make clean, make check => standard targets available in any Autotools-based pkg, once the pkg has been downloaded and configured on your system

2. For making distribution: make dist (creates a tarball named *.tar.gz by collecting all src/other files, which is ready for distribution), make distcheck (to check the pkg for any errors/issues), make distclean.

3. Staged installation: using DESTDIR, we can divert the install step of "make install" to a dir other than the usual dir. Then we can choose and move files to whichever dir we want.

ex: make DESTDIR=~/scratch install

B. configure options: configure --help gives all options, few important ones are listed below.

1. Std directory var: prefix (default is /usr/local). By changing the value of prefix, we can put bin, lib, doc, etc in another dir.

ex: ./configure --prefix=$HOME/usr => puts the binary "hello" in ~/usr/bin/hello, libs in ~/usr/lib, etc.

2. Std configuration var: CC, CFLAGS, LDFLAGS, CPPFLAGS.

ex: ./configure CC=gcc3 ... => configure automatically chooses appropriate default values for these vars, but sometimes we may want to override the defaults.

3. Parallel build tree: GNU build system has 2 trees: source tree and build tree. The source tree is the dir containing "configure" and all the src files. The build tree is the dir where "./configure" is run, creating object files and other intermediate files. Most of the time, we run "./configure" in the same dir where configure is located, so source and build tree are the same. But, if we want to keep our source files uncluttered from generated files, we can put the build tree in a separate dir by doing this:

ex: ~/.../top-dir-pkg (this is dir where you extracted files, and has configure script). "mkdir build", "cd build", run "../configure" and "make" in build dir. This keeps all generated files in build dir, keeping main source dir intact.

4. Cross compilation: to generate binaries for a different system than the one where we are compiling the files. By default, binaries are generated for the same system where we compile the files.

ex: ./configure --build=i68cpc --host=solaris => Here, build denotes our system, whereas host is the system for which we generate binaries. For binaries to get generated for the host system, a cross compiler has to exist on the build system, else it will error out.

5. Pgms can be renamed by using --program-prefix, --program-suffix (i.e. instead of installing a pgm with the name "tar", we can install it as "my-tartest", by using prefix=my- and suffix=test, to prevent overwriting the "tar" that is already installed). See the example below.
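ex (an illustration of the renaming options above; it assumes the pkg being built installs a pgm named "tar"): ./configure --program-prefix=my- --program-suffix=test => the pgm gets installed as "my-tartest" instead of "tar", leaving the existing "tar" untouched.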


Simple example of building a pkg: This ex shows how to build pkg like *.tar.gz from source files that can be distributed.

Autotools is usually installed by default. We can check the version numbers of Autoconf/Automake by running them with --version.

$ autoconf --version
autoconf (GNU Autoconf) 2.69

$ automake --version
automake (GNU automake) 1.13.4

$ autoreconf --version
autoreconf (GNU Autoconf) 2.69

ex1: write a C pgm (hello.c) that uses the gettimeofday function. We need to run autotools in the same dir as the pgm hello.c. These are the steps for running autotools. The whole goal of autotools is to generate 2 files: configure and Makefile.

0. write C pgm as below called hello.c:

#include <stdio.h>
#include <sys/time.h> // this is added on purpose, so that we can make this system dependent
 
int main(int argc, char* argv[])
{
   double sec;
   struct timeval tv;
 
   gettimeofday(&tv, NULL); // This function only exists in sys/time.h, so if that file doesn't exist, this will error out
   sec = tv.tv_sec;
   sec += tv.tv_usec / 1000000.0;
 
   printf("%f\n", sec);
 
   return 0;
}

1. autoconf + automake steps =>

  • Autoconf has a series of steps to generate the configure script. configure is a very large portable shell script. Autoconf needs configure.ac as an i/p file to generate the configure script.

    we can write configure.ac as below: AC_* and AM_* are M4 macros. AC_* are autoconf macros, while AM_* are automake macros.

    AC_PREREQ([2.69]) => Autoconf version number (optional)
    AC_INIT([hello-pkg], [1.0], [bug-report email address]) => pkg name, version, email addr for bug reporting
    AC_CONFIG_SRCDIR([hello.c])
    AM_INIT_AUTOMAKE([-Wall -Werror foreign]) => NOTE: this is an automake macro (AM_*), not an autoconf macro (AC_*). The options inside are optional. We turn on all warnings and report them as errors by using -Wall and -Werror. foreign allows us to proceed even w/o having files such as README, AUTHORS, NEWS, etc. Else, automake will complain about these missing files and won't allow us to generate the pkg. Also, if plain autoconf is run with this AM_* macro (without aclocal), it will error out with "undefined macro"
    AC_CONFIG_HEADERS([config.h]) => causes the configure script to create a config.h file gathering the ‘#define’s defined by other macros in configure.ac. This config.h file can be included in the hello.c file, and those defined constants can then be used in our program to make it portable.
    AC_PROG_CC => causes the configure script to search for a C compiler and define the variable CC with its name
    AC_CHECK_HEADERS([sys/time.h]) => checks for header files
    AC_CHECK_FUNCS([gettimeofday]) => checks for library func in src files
    AC_CONFIG_FILES([Makefile]) => list of all Makefiles that should be generated from their Makefile.in files. If Makefiles are in nested dirs, provide all of them here
    AC_OUTPUT => closing command that actually produces the part of the script in charge of creating the files registered with AC_CONFIG_HEADERS and AC_CONFIG_FILES (i.e config.h and Makefile)

  • Automake generates Makefile.in. It needs Makefile.am and configure.ac as i/p to generate Makefile.in.

    Makefile.am can be a simple file specifying the o/p binary and the i/p C pgm, as shown below.

    bin_PROGRAMS=hello
    hello_SOURCES=hello.c

2. autoreconf --install => With the 3 files above (hello.c, configure.ac, Makefile.am), we could run autoconf on configure.ac to generate configure, automake on Makefile.am and configure.ac to generate Makefile.in, and autoheader to generate config.h.in. But it would require a lot of work to get everything running in the right order. Autoreconf is a script that calls autoconf, automake, autoheader, aclocal and a bunch of other commands in the right order. So, this is the preferred step instead of running autoconf and automake separately. This step creates the configure, config.h.in and Makefile.in files. It also creates a bunch of other files such as install-sh, depcomp, missing, aclocal.m4 and the dir autom4te.cache.

3. configure => At this point the package is ready to build. Steps 3 and 4 are what a user would run on any system the package is downloaded to, in order to create the executable. We run these steps here to check that everything runs OK. With the 3 files (configure, config.h.in and Makefile.in) generated in step 2, running the ./configure script creates Makefile (from Makefile.in) and config.h (from config.h.in). These 2 files have been created after probing the system, so they are tailored to this system. There are also extra files created called config.status and config.log.

4. make => running make generates the executable hello (and shows the actual compile steps). hello.o and hello will be the files generated by this step.

5. make install => We do not run this step here, as it would install binaries in the system dirs, which we do not want on our system. Steps 3, 4, 5 are run by folks downloading our pkg.

6. make distcheck => creates the final *.tar.gz distribution pkg as "hello-pkg-1.0.tar.gz", and also checks that it builds cleanly.

Now that we have the final pkg, it can be distributed to anyone. However, our program is not yet portable for all systems, as there are functions in our hello.c pgm that may not be present in the C library on some systems. config.h is the file that comes to our rescue here. configure checks for each function/header listed in configure.ac, and config.h records the results as "#define" constants that we can use to check whether the system has that function or header. For ex: looking in config.h, we see these lines:

/* Define to 1 if you have the `gettimeofday' function. */
#define HAVE_GETTIMEOFDAY 1

/* Define to 1 if you have the <sys/time.h> header file. */
#define HAVE_SYS_TIME_H 1

Now, in our C pgm, we can use these constants to check for the existence of these on that system. So, we modify our C pgm to make it portable.

Our modified C pgm looks like this:

#include <stdio.h>
#ifdef HAVE_CONFIG_H
#include "config.h" /* config.h is generated by configure and holds the HAVE_* defines; without including it, the checks below would never be true */
#endif
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#else
#include <time.h>
#endif

int main(int argc, char* argv[])
{
   double sec;
#ifdef HAVE_GETTIMEOFDAY
   struct timeval tv;

   gettimeofday(&tv, NULL);
   sec = tv.tv_sec;
   sec += tv.tv_usec / 1000000.0;
#else
   sec = time(NULL);
#endif
   printf("%f\n", sec);

   return 0;
}

Now, since we modified our pgm, we need to rerun steps 3 and 4 to make sure our pgm still compiles fine. Then we can run step 6 to create the tar.gz that can be distributed.

----------

OPTIONAL: The steps below are an alternate set of steps that are not recommended. But they are good to know, in case we do not want to run autoreconf and instead plan to run autoconf and automake separately.

A. autoscan => generates configure.scan. It's a small file, and should look very similar to the configure.ac file above. It has autoconf macros only (i.e. AC_*). We will need to add the automake macros (AM_*) to it, and rename it to configure.ac to use it in the flow above.

B. autoconf => uses configure.ac to generate configure. It needs config.h.in and Makefile.in to exist. If we do not want to write config.h.in from scratch, we can use autoheader to generate config.h.in. Makefile.in contains very basic Makefile instructions, which are used to generate Makefile.

C. autoheader => generates config.h.in. It just has a few constants which are left undefined.

D. automake =>  generates Makefile.in from Makefile.am.

E. aclocal => There will be a lot of errors in the automake step above. The 1st set of errors will be about automake macros which aren't found in configure.ac. If we add these macros to configure.ac, then autoconf will freak out, since it doesn't know these macros. To remedy this, we provide the definitions of these macros in aclocal.m4. We run aclocal to create aclocal.m4 automatically, with definitions of all the AM_* macros needed by automake.

At this point, we have config.h.in and Makefile.in. So, the configure script can run now, followed by make.

F. run configure: ./configure => generates config.h from config.h.in, and Makefile from Makefile.in. config.h will look the same as config.h.in, except that the constants are #define'd now. Makefile will look the same as Makefile.in.

G. run make => Once Makefile is generated, we can run make. make all => generates the executable hello using the rules in Makefile. Now we can run ./hello to run the executable.

 

GCC: GNU Compiler Collection

Before learning C or C++, we need to learn how to compile a C/C++ program. The program that compiles C/C++ into machine code is called GCC (GNU Compiler Collection). Very good pdf here (by Brian Gough) = https://tfetimes.com/wp-content/uploads/2015/09/An_Introduction_to_GCC-Brian_Gough.pdf

Installing GCC:

Check if gcc is installed by running "gcc -v" on your linux terminal.

gcc -v => shows "gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) " along with some other info.

GCC is installed by default on CentOS. However, on Linux Mint, you may get errors about various std libs not being found when trying to run GCC (even though gcc is installed). This is because not all the libs needed by gcc are installed. If you get errors running gcc and gcc is already installed on your system, follow these steps ($ below represents the terminal prompt).

Debian based OS: (Linux Mint, Ubuntu etc): Run following 2 cmds:

$ sudo apt update => updates pkg data repository. Needed before you install anything

$ sudo apt install build-essential => build-essential package is a reference for all the packages needed to compile a Debian package. It generally includes the GCC/g++ compilers and libraries and some other utilities (as Make, etc).

Fedora based OS (RHEL, CentOS, etc): Run following 2 cmds: 

$ sudo yum makecache => makes sure that the yum cache is up to date with the latest metadata. (Not sure if we can use "sudo yum update" instead of this)

$ sudo yum group install "Development Tools" => "Development Tools" is a yum group which contains all the pgms needed for compiling etc (gcc, cvs, rpm-build, etc). This installs all of those in 1 cmd.

 

Running "which gcc" shows that gcc is in path /usr/bin/gcc (binary file). Compiler "cc" used to be the default compiler in past, so usually there is soft link in /usr/bin/cc pointing to gcc, so that cc can be run as well.

We will explore gcc in more detail as we learn C and C++. Here are the basics with the help of a C/C++ pgm. C pgms need gcc to compile, while C++ pgms require the g++ compiler.

Compiling C pgm using gcc:

C pgm ex: write program hello.c as below

#include <stdio.h> => this file is in /usr/include/stdio.h

int main (void) {

  printf ("Hello, world!\n"); => printf is a function that is declared in stdio.h, so stdio.h had to be included. Only the declaration of function is done in stdio.h, actual body of function "printf" is itself stored in library /usr/lib/libc.a.

  return 0;

}

#include files:
-------------
There are 2 versions of the #include preprocessor directive. A full path, partial path or just the name of the file can be provided. If a full path is provided, then the 2 versions of #include have the same effect, else they differ in how they search for the file.
1. #include <file_name> => system include. used for std header files. Here compiler searches for the file in std paths. Usually it's /usr/local/include (higher precedence) and /usr/include (lower precedence). We can provide full path of file here too, however that is not a good habit, as that file may not have same path on other systems, thereby making this pgm non portable. There is -I option that can be used for non-std path, which is discussed later.
2. #include "file_name" => user include. used for user defined header files. Here compiler first searches for the include file in the dir where your current source file resides. The current source file is the file that contains the directive #include "file_name". The compiler then searches for the include file according to the search order described above in version 1.

GCC options:

To compile pgm above, type:
gcc hello.c => This compiles the hello.c pgm into an executable called a.out in the same dir. Running ./a.out will print "Hello, world!" on screen. The #include directive instructs the preprocessor to include the stdio.h file at the appropriate point. That is why we don't need to explicitly compile that file.

gcc -Wall -v hello.c -o hello => -o specifies that the output executable file should be named hello instead of a.out. -Wall turns on all common warnings (recommended to always use it). We can turn on specific warnings by using -Wcomment, -Wformat, etc (or even more warnings by using -W in addition to -Wall). -v shows details about the various paths and options used.

Producing a machine language executable is a 2 step process when multiple files are involved. First we create a compiled object file for each source file, and then a linker program (called ld, but it's invoked automatically by gcc) links all these compiled object files to produce an executable a.out. An object file contains machine code where any references to the memory addresses of functions (or variables) in other files are left undefined. This allows source files to be compiled without direct reference to each other. The linker fills in these missing addresses when it produces the executable.

steps:

1. gcc -Wall -c main.c => If we use option -c, then instead of generating an executable, an object file called main.o is generated. An object file with the same name as the source file is created by default (so main.c creates an object file main.o). Similarly, we create object files for all other files. When creating object files, the compiler just notes any unresolved symbols and leaves the addr "blank" for that symbol/function.

2. gcc -Wall -c other.c => generates other.o

3. gcc main.o other.o -o hello => this step calls the linker ld, which links all object files to create an executable. Now, ./hello can be run. Order is important here. Files are searched from left to right, so files which have functions that are called by other files should appear last. So, if main.c calls a function my_func defined in other.c, then main.o should be put before other.o.

Instead of running the 3 cmds separately, we can also run it in 1 cmd as follows:

gcc main.c other.c -o hello => produces executable hello
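A minimal sketch of the 2 source files assumed by the cmds above (the function name my_func comes from the explanation in step 3; everything else is illustrative):

/* other.c */
#include <stdio.h>

void my_func(void)           /* defined here; main.o only has an unresolved reference to it */
{
    printf("inside my_func\n");
}

/* main.c */
void my_func(void);          /* declaration only; the linker fills in the real addr from other.o */

int main(void)
{
    my_func();
    return 0;
}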

Linking with external libraries:

A library is a collection of precompiled object files which can be linked into programs. Libraries are typically stored in special archive files with the extension ‘.a’, referred to as static libraries. They are created from object files with a separate tool, the GNU archiver ar, and used by the linker to resolve references to functions at compile-time. The standard system libraries are usually found in the directories ‘/usr/local/lib’ (higher precedence) and ‘/usr/lib’ (lower precedence). On 64 bit platforms, additional lib64 dirs are also searched.

C std lib: /usr/include/stdio.h and a few other *.h files hold the std header files (which have the function declarations), while /usr/lib/libc.a is the C std lib which has the definitions of all the functions in the C std. We just include the header files in the C pgm; the std C lib is then linked by default for all C pgms.

C math lib: /usr/include/math.h has the header (with function declarations for math functions such as sqrt), while /usr/lib/libm.a is the C math lib which has all the math functions. This lib is not linked by default, even if we include math.h in the C pgm. Compiler option -lNAME (small letter "l" (as in love) with no space b/w l and NAME) will attempt to link object files with a library file ‘libNAME.a’ in the standard library directories. So, to link the math lib, we should use "-lm" (that links libm.a from the std dir which is /usr/lib/). To link more libs, we'll need a -lNAME for each of them. Instead of -lm, we can also provide the full path of the file, /usr/lib/libm.a, on the gcc cmd line.
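A short sketch showing why -lm is needed (the file name calc.c is just illustrative):

/* calc.c */
#include <stdio.h>
#include <math.h>    /* declares sqrt(); the definition lives in libm.a (or libm.so) */

int main(void)
{
    double x = 2.0;
    printf("sqrt(%f) = %f\n", x, sqrt(x));
    return 0;
}

gcc -Wall calc.c -o calc -lm => links libm; without -lm the linker typically reports "undefined reference to `sqrt'" (some gcc versions optimize the constant call away, so the error is not guaranteed).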

 The list of directories for header files is often referred to as the include path and the list of directories for libraries as the library search path or link path.

 When additional libraries are installed in other directories, it is necessary to extend the search paths in order for the libraries to be found. The compiler options ‘-I’ (capital I as in India) and ‘-L’ (capital L as in Love) add new directories to the beginning of the include path and library search path respectively.

ex: gcc -Wall -I/opt/gdbm/include -L/opt/gdbm/lib dbmain.c -lgdbm (here the non-std gdbm pkg is installed in /opt/gdbm. gdbm.h is in /opt/gdbm/include/gdbm.h, while libgdbm.a is in /opt/gdbm/lib/libgdbm.a)

There are environment variables also which can be set instead of -I and -L options above.

1. include path: var C_INCLUDE_PATH (for C header files), CPLUS_INCLUDE_PATH (for C++ header file)

2. Static Lib search path: var LIBRARY_PATH

These vars can be set on the cmdline, or be put in the .bashrc file, so that they take effect all the time.

ex: add these in .bashrc in home dir.

C_INCLUDE_PATH=.:/opt/gdbm-1.8.3/include:/net/include:$C_INCLUDE_PATH => adds current dir (due to . in front) and other paths to C_INCLUDE_PATH if it had any.

LIBRARY_PATH=.:/opt/gdbm-1.8.3/lib:/net/lib:$LIBRARY_PATH => adds current dir and other paths to LIBRARY_PATH if it had any.

export C_INCLUDE_PATH; export LIBRARY_PATH => export cmd is needed so that these var can be seen outside of current shell by other pgms as gcc.

So far, we have been dealing with static libraries. There is a concept of shared libraries explained nicely in the pdf book. Dynamic linking of these shared libraries is done at run time, so the executable file (a.out) is smaller in size (as a.out doesn't contain the full object code of the functions in the .a file). Instead it keeps a small table that tells it where to get them from. The OS takes care of this by loading a single copy of the shared lib in memory, and providing a pointer to that shared lib whenever a.out requests access to it. Instead of the .a extension, shared libs have the .so extension, and reside in the same dirs where .a files reside. By default, .so files will be linked instead of .a files if .so files are present. If .so files are in a non-std path, then we either need to provide the full path to the .so file on the cmd line, or need to add this 3rd var also:

3. Dynamic lib search path: var LD_LIBRARY_PATH

We can force compiler to do static linking only by using option -static.
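A rough sketch of building and using our own shared lib (the names greet.c, libgreet.so and use_greet.c are hypothetical; exact flags can vary by platform):

gcc -Wall -fPIC -c greet.c => compile position-independent object code, as needed for a shared lib
gcc -shared -o libgreet.so greet.o => create the shared library
gcc -Wall use_greet.c -L. -lgreet -o use_greet => link against it (the linker finds libgreet.so in the current dir)
LD_LIBRARY_PATH=. ./use_greet => at run time, the dynamic loader must also be able to find libgreet.so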

C language standards:

The original C language standards are the ANSI/ISO C standards (called c89 and c99). GNU then added extensions to the language, called the GNU standards (gnu89 and gnu99). By default, gcc compiles in the GNU dialect and uses the GNU C lib (glibc). However, if we want a strict ANSI/ISO C pgm, we can compile with the -ansi or -std=c99 option.
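A small sketch of the difference, using the GNU typeof extension (the file name ext.c is illustrative; exact diagnostics vary by gcc version):

/* ext.c */
#include <stdio.h>

int main(void)
{
    int x = 5;
    typeof(x) y = x * 2;   /* typeof is a GNU extension: fine in the default (gnu) mode, rejected in strict c99 mode */
    printf("%d\n", y);
    return 0;
}

gcc -Wall ext.c => compiles (GNU dialect by default)
gcc -Wall -std=c99 ext.c => errors out on the GNU extension (use -std=gnu99 to allow it)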

 Preprocessor:

# statements in C. #define and #ifdef ... #endif are used to compile only desired sections of C code. Instead of using #define in the C pgm (which would require changing the C pgm), we can define it on the cmd line using -DNAME (i.e. for #ifdef TEST ... #endif, we can pass -DTEST, which is equiv to #define TEST). To give the macro a value, we can do -DNUM=23 (equiv to #define NUM 23), or -DMSG='"My Hero"', etc.
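A minimal sketch of conditional compilation driven from the cmd line (the file name dbg.c and macro names TEST/NUM are illustrative):

/* dbg.c */
#include <stdio.h>

int main(void)
{
#ifdef TEST
    printf("TEST build, NUM = %d\n", NUM);   /* only compiled in when TEST is defined */
#else
    printf("normal build\n");
#endif
    return 0;
}

gcc -Wall dbg.c -o dbg => prints "normal build"
gcc -Wall -DTEST -DNUM=23 dbg.c -o dbg => prints "TEST build, NUM = 23"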

 Optimization level:

Different opt levels are supported by gcc. -O0 is level 0 opt, and is the default. -O1, -O2 and -O3 refer to higher levels of code opt.

 Platform specific options:

By default, if it's running on an x86 system, GCC produces executable code which is compatible with all the processors in the x86 family, going all the way back to the 386. However, it is also possible to compile for a specific processor to obtain better performance.

gcc -march=pentium4 => produces code that is tuned for pentium4, so it may not work on all x86 processors. Better to not use this option, as it provides only a little speed improvement. Similarly there are options for powerpc, sparc, and dec alpha processors.

gcc -m32 generates 32 bit code on 64bit AMDx86-64 systems. Not using -m32 will produce 64 bit code by default.

other options:

gcc --help

gcc --version

gcc -v test.c => verbose compilation, shows exact seq of cmds used to compile and link. Shows full dir paths used to search header files and libs.

Compiling C++ pgm using g++:

C++ pgm ex: write program hello.cc as below

#include <iostream>

int main () {

   std::cout << "Hello, world!" << std::endl; //similar to printf func of C

   return 0;

}

compile: g++ -Wall hello.cc -o hello => here we used g++ for compiling a C++ pgm. We could have used gcc too, as it compiles all files ending in .cc, .C, .cxx or .cpp as C++ pgms. The only problem that may happen when using gcc to compile C++ files is that the appropriate C++ lib may not get linked (*.o files produced by g++ can't be linked using gcc). It's always preferable to use g++ for compiling C++ pgms, and gcc for C pgms. g++ has exactly the same options as gcc.

C++ std lib: The C++ standard library ‘libstdc++’ supplied with GCC provides a wide range of generic container classes such as lists and queues, in addition to generic algorithms such as sorting.

Compiler related tools:

1. GNU archiver : called as "ar", it combines a collection of object files into a single archive file, also known as a library.

ar cmd: ar cr libfn.a hello.o bye.o => creates an archive from 2 simple object files. cr = create and replace

ar t libfn.a => lists all object files in archive. Here it lists hello.o and bye.o

gcc -L. main.c -lfn -o main => This lib archive libfn.a can be used like any other static lib. -L. just adds . to lib search path (assuming we generated libfn.a in current dir)

2. gprof: gnu profiler for measuring performance of a pgm.

3. gcov: gnu coverage tool analyzes coverage of pgm = how many times each line of pgm is run during execution

Compiler steps: Running gcc/g++ involves these 4 steps. These are all run behind the scenes when running gcc/g++, but can be run separately too.

1. preprocessing of macros: the preprocessor expands all macros and header files.

ex: gcc -E hello.c > hello.i => hello.i contains the source code with all macros expanded (-E stops gcc after the preprocessing stage)

2. assembly code generation: assembly code is then generated. It still has calls to external functions.

ex: gcc -S hello.i => hello.s is generated which has assembly code

3. assembler: converts assembly language into machine code and generates an object file. Addrs of external functions are still left undefined, to be filled in by the linker.

ex: as hello.s -o hello.o

4. Linking: any external functions from the system or C run time lib (crt) are linked here.

ex: ld -dynamic-linker /usr/.../.so /../crt1.o hello.o ...  => All these object files linked together (with proper addr of func called)

ex: gcc hello.o -o hello => this gcc cmd invokes the linker automatically when generating an executable from object files

Examining Compiled Files:

ex: file a.out => shows details of file a.out, whether it's ELF format, 32/64 bit, which processor it was compiled for (INTEL 80386, etc), dynamic/static link, and whether it contains a symbol table.

nm a.out => this shows the location of all vars and funcs used in the executable. T against a func name indicates the func is defined in the object file, while U indicates undefined (may be because it's going to be dynamically linked at run time, or we need to link the file having that func with this executable)

ldd a.out => This shows list of all shared lib, that are to be linked at runtime. It shows all dynamic lib (usually libc.so, libm.so), as well as dynamic loader lib (ld-linux.so)

 --------------------

 

graywolf is a fork of TimberWolf:

TimberWolf doc is here: http://opencircuitdesign.com/qflow/archive/TimberWolf.pdf

To install TimberWolf, we need to install a couple of other software packages:

A. CMake => CMake is a popular alternative to autotools. It is used in many open-source projects including large ones such as KDE, LLVM and Blender. See the CMake section for more details on cmake.

Steps for downloading cmake:

1. download cmake from here: https://cmake.org/download/ (cmake-3.14.0-rc2.tar.gz)

2. extract the .gz file, and goto dir "cmake-3.14.0-rc2". Now run these 3 steps: 1. ./bootstrap 2. make 3. sudo make install

B. GSL => GNU Scientific Library (libgsl): this is used in C pgms to call many scientific functions.

1. download GSL from here: https://www.gnu.org/software/gsl/ 2. goto the gnu ftp mirror link and download gsl-latest.tar.gz (currently latest is pointing to version 2.5)

2. extract the .gz, cd to the gsl-2.5 dir. Now run these 3 steps: 1. ./configure 2. make 3. sudo make install

3. We should see the libgsl.so and libgslcblas.so shared libs in the /usr/local/lib dir. Also make sure the env var "LD_LIBRARY_PATH" includes "/usr/local/lib/" (assuming gsl was installed in the default path). echo $LD_LIBRARY_PATH => :/usr/local/lib/ . All header files for the gsl lib will be in /usr/local/include/gsl

Now, to test that gsl is installed correctly, we can write a simple C test program which includes one of the GSL functions, and see if it works. Write a pgm named gsl_test.c:

#include <stdio.h> //needed for printf below
#include <gsl/gsl_sf_bessel.h> //this header file dir is in /usr/local/include dir
int main (void) {
    double x = 15.0;
    double y = gsl_sf_bessel_J0 (x);
    printf ("J0(%g) = %.18e\n", x, y);
    return 0;
}

Run:  gcc gsl_test.c -lgsl -lgslcblas => This should create a.out, and running a.out should produce bessel output.

-----

 Once the above 2 software packages are installed, we install TimberWolf (graywolf) as follows:

1. Download graywolf from here: https://github.com/rubund/graywolf.

2. Download the zip file, "graywolf-master.zip". Unzip it, and you should have a dir "graywolf-master". cd to that dir, and read thru README.md. It has instructions for installing it. Run these steps:

I. cd graywolf-master

II. mkdir build

III. cd build

IV. cmake .. => NOTE: cmake is used to build this, instead of traditional GNU Autotools

This runs CMakeLists.txt in dir graywolf-master. At this stage, you may get an error:

--   No package 'gsl' found
CMake Error at CMakeLists.txt:15 (MESSAGE):
  The development files for the GNU Scientific Library (libgsl) are required
  to build graywolf.

This happens, since pkg_check_modules is not able to find GSL package, even though it's installed in std location.

To fix this, modify CMakeLists.txt in dir graywolf-master. comment out line "pkg_check_modules(GSL gsl)" and replace it with "include(FindGSL)". This will allow cmake to find GSL pkg.

#pkg_check_modules(GSL gsl)

include(FindGSL)

Once done, cd to build dir, and run "cmake .." once again. This time it should run fine:

[graywolf-master/build]$ cmake ..
-- Configuring done
-- Generating done
-- Build files have been written to: /home/Ajay/Downloads/graywolf-master/build

V. make => now run make. Last few lines on screen look something like this:

[100%] Built target mc_compact
Scanning dependencies of target run
[100%] Generating show_flows
[100%] Built target run

VI. sudo make install => this will install various files of this software in /usr/local/lib and /usr/local/bin

VII. make test => This will run 6 tests, but 1 of them fails.

The following tests FAILED:
      5 - map9v3-twmc (Failed)
Errors while running CTest
make: *** [test] Error 8

VIII. CTEST_OUTPUT_ON_FAILURE=1 make test => this step is optional. running this produces more verbose o/p for test 5 to help us debug.

IX. now cd to any dir (e.g. cd /home/), type "graywolf" on the terminal, and it should run. Add steps on running it => FIXME

 

yosys - open source synthesis tool

Yosys is an open source synthesis tool. You provide it an RTL design, and it spits out an optimized gate level netlist.

yosys details are on this link: http://www.clifford.at/yosys/about.html

yosys download and installation: I'll show steps for both debian based OS (as Linux Mint or Ubuntu) and Fedora based OS (as CentOS). I haven't gotten to installing Yosys on Linux Mint, so will provide instructions for it later.

A. CentOS: I will show steps for yosys installation on the CentOS 7.5 1804 distro.

1. download python3 => see instructions for downloading python3 on python page. Do these steps:

  •  I. First do "sudo yum install epel-release".
  •  II. Next do "sudo yum install python3.4"
  •  III. sudo curl -O https://bootstrap.pypa.io/get-pip.py
  •  IV. sudo /usr/bin/python3.4 get-pip.py

2. download tcl/tk => see instructions for downloading tcl/tk in tcl/tk page. Use manual download and install from tcl8.7a

3. download libffi => run "sudo yum install libffi libffi-devel". Run "locate libffi" This will show libffi.so lib in /usr/lib64 and docs in /usr/share/doc. ffi.h file will be in /usr/include/libffi.h

4. install readline => sudo yum install readline-devel. This creates /usr/include/readline/readline.h

5. Now download yosys from here: http://www.clifford.at/yosys/download.html. Steps below:

 A. download yosys-0.8.tar.gz. Extract it within the file manager using right click and choosing "extract". That will create another dir named "yosys-yosys-0.8"

 B. type "make config-gcc". This will create Makefile with gcc as the compiler. this will suffice, as gcc can compile C++ also. There is no need to install clang.

 C. type "make" => this will start compilation process using gcc (should show CONFIG := gcc from Makefile.conf). Possible Errors:

  • tcl.h not found (called in kernel/yosys.h) => tclsh not installed or found. See bullet 2 above.
  • tclsh command not found => If you see this error and tclsh is already installed, probably the link or path for tclsh is not correct.
  • readline/readline.h not found (called in kernel/driver.cc) => readline not installed. See bullet 4 above.
  • ffi.h not found (called in frontends/ast/dpicall.cc) => libffi not installed. See bullet 3 above.

     If no errors are found, we should see a 100% build for yosys, then it downloads "abc" from berkeley, does a 95% build for the abc binary, and then finally shows the "Build successful" message.

7. make test => runs all tests. Needs Icarus Verilog ("iverilog") installed.

8. sudo make install => This is the final step; it puts the yosys binary in /usr/local/bin/yosys. Other yosys related binaries also go here.

9. Now running yosys (typing yosys or /usr/local/bin/yosys) should bring up the yosys tool. However, if we get "error while loading shared libraries: libtcl8.7.so: cannot open shared object file: No such file or directory", that means the LD_LIBRARY_PATH var is not set. If we locate "libtcl8.7.so", we see it is in /usr/local/lib/libtcl8.7.so. "echo $LD_LIBRARY_PATH" shows blank. For bash shell, type "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/". Now, "echo $LD_LIBRARY_PATH" will show ":/usr/local/lib/". Type "export LD_LIBRARY_PATH". Now typing yosys brings up the cmd line yosys tool. To make this change permanent, add this line in ~/.bashrc (assuming you are using bash):

  • export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/

10. Install open src s/w graphviz for graphical viewing: sudo yum install graphviz => installs graphviz-2.30*

11. All yosys cmds and scripts are available on the yosys webpage.

 

yosys usage:

 1. Read Yosys manual which explains everything in very good detail. http://www.clifford.at/yosys/files/yosys_manual.pdf

 

Yosys example: TO DO ---

 Setting up an internet server:

You can host a website on any computer; such computers are called servers. Hosting services are provided for a yearly fee by a lot of hosting companies such as godaddy.com, ipage.com, hostinger.com to name a few. If you do not want to pay anyone money, and want to use your own computer for hosting a website, that's pretty easy too.

Before hosting a website on your personal computer in your home, you will need to configure the router. Then you need to install a server software such as "apache", and then get a name assigned to your website. We will talk about installing apache in this section. In the next section, we'll talk about all the other steps.

Here, I'll talk about hosting a website on a laptop running Linux OS "Linux Mint 18". These steps should work on latest Ubuntu release too.

Install apache:

First, we need to install the software that will make our regular laptop double as a server. This software is called apache (it runs as the httpd process, where d stands for daemon). Here's a good link explaining the steps (steps are the same on any debian based system, as Ubuntu, Linux Mint, etc):

UBUNTU: https://www.digitalocean.com/community/tutorials/how-to-install-the-apache-web-server-on-ubuntu-16-04

LINUX MINT: https://www.computerbeginnersguides.com/blog/2017/07/25/install-and-configure-apache-web-server-on-linux-mint-18-2/

First, install apache. We install apache2, since it's the latest apache.

  • sudo apt-get update => we could also use more modern cmd "apt" instead of "apt-get", i.e: "sudo apt update"
  • sudo apt-get install apache2 => could also run "sudo apt install apache2". It will ask for the password, and then install Apache, utilities, configuration files, etc.

Once installed, run cmds as below:

which apache2 => should show /usr/sbin/apache2 as the path

Debian/Ubuntu start Apache automatically after installation, so we do not need to start apache separately. Now, goto the web browser, and type "localhost" or "127.0.0.1" in the address tab. localhost resolves to the IP address 127.0.0.1, which is the most commonly used IPv4 loopback address. This address is used to establish an IP connection to the same machine or computer being used by the end-user. So, typing this makes the computer look for the httpd process running on the same machine and fetch the default website page from there. It should show the "Apache2 Ubuntu Default Page" with important dir info. This page is located at /var/www/html/index.html.

apache2 dir:

/usr/sbin/apache2 => apache2 is the executable. Normally running this executable directly should start the apache2 pgm, but due to the use of env vars in its default configuration, apache2 can't be started/stopped directly by running the binary. Cmds to start/stop it are discussed later.

/usr/share/doc/apache2/README.Debian.gz => apache2 documentation


/etc/apache2/ => this dir has many important configuration files

  • envvars => contains default environment variables for the apache2ctl script
  • apache2.conf => apache2.conf is the main configuration file. It puts the pieces together by including all remaining configuration files when starting up the web server. It includes following file:
    • ports.conf: determines listening ports for incoming connections which can be customized anytime. By default, port 80 is assigned to http, and port 443 to https (secure http).
  • Configuration files in the mods-enabled/, conf-enabled/ and sites-enabled/ directories contain particular configuration snippets which manage modules, global configuration fragments, or virtual host configurations respectively.
    • mods-enabled => This dir contains many *.load and *.conf files, one pair for each module (i.e. alias.conf and alias.load for the alias module, and so on). The *.load file just loads the module *.so from the appr dir (i.e. LoadModule dir_module /usr/lib/apache2/modules/mod_dir.so). The *.conf sets config for the module, and is written in an xml-like format.
    • conf-enabled => a few more *.conf files for customizing
    • sites-enabled => There is usually just 1 file here, 000-default.conf. This is a very important file if you plan to host more than 1 virtual host on the same ip address. In that case, for each virtual host, we need to specify "ServerName", "ServerAdmin", "DocumentRoot", and log file locations.
  • Configuration files that we saw above are links from the dirs mods-available, conf-available and sites-available. Whenever we put a soft link from a *-available dir to the corresponding *-enabled dir, those files get activated. For Linux Mint, these files are enabled/disabled by using helper scripts: a2enmod/a2dismod, a2ensite/a2dissite and a2enconf/a2disconf.

 /var/log/apache2 => stores log files for all requests apache processes.

apache2 cmds:

Now, depending on which linux distro we have, we have a bunch of cmds that are used to monitor/control the apache server. Due to the use of env vars, calling the binary apache2 directly to start/stop apache2 doesn't work.

This is a good link showing diff ways to do that:

https://www.cyberciti.biz/faq/ubuntu-linux-start-restart-stop-apache-web-server/

Most modern Linux OS use systemd cmds to check the status of a pgm. On earlier versions of linux, the init process was started as the 1st process, but RedHat introduced systemd to replace init. All derivatives of Red Hat, and the latest versions of other linux distros, use systemd. The "systemctl" cmd is used for systemd based systems. Ubuntu uses systemd starting from "Ubuntu 16.04 LTS", but earlier versions of Ubuntu still use init. For more details, see the "init vs systemd" link in the "linux intro" section.

Init for apache2:

On Linux Mint 19, I see apache2 files getting installed in /etc/init.d. Installation of Apache essentially runs this cmd at some point during installation:

sudo update-rc.d apache2 defaults => This creates the appropriate symlinks in the /etc/rc*.d/ folders. Ubuntu uses scripts in the /etc/init.d/ folder to start/stop services. For the apache2 script, we see that there are links starting with S in rc2.d, rc3.d, rc4.d and rc5.d, links starting with K in rc0.d and rc1.d, and no link in rcS.d. Since the runlevel for ubuntu is 2, all links in rc2.d that start with S get started, so apache2 gets started by default anytime the computer boots up (since its link starts with S).

If we want to disable apache2 on startup, we can run this cmd: sudo update-rc.d apache2 disable which removes all the "S" symlinks and replaces them with "K" symlinks

to re-enable Apache: sudo update-rc.d apache2 enable

ex: /etc/init.d/apache2 status => Here, apache2 process init script is called to check status of apache2

systemd for apache2:

We'll use "service" or "apache2ctl" cmds, as they work on all linux distro. apache2ctl is apache server ctl i/f cmd, so it ships with apache, and will work anywhere apache is installed. However, on linux mint 19, apach2ctl is not installed by default, when installing apache2, so we'll stick with service cmd. Currently, apach2ctl uses /etc/init.d/apache2 script, but in future, it may use native systemd unit file.

Find out if apache2 is running: (similarly we can use start/stop to start/stop the apache server. However start/stop requires root privileges, so we need to use sudo before each cmd)

service apache2 status => displays "* apache2 is active (running)". shows detailed o/p on linux mint

/etc/init.d/apache2 status => same o/p as above

systemctl status apache2.service => same o/p as above, for systemd based systems

Customizing website:

We saw above that index.html is the file fetched to display the webpage for your website. It's located at /var/www/html/index.html and has the default ubuntu info. We can move this file to a backup file (index.back.html), and start from scratch with a new index.html file, putting very simple html code in it.