Tuesday, 27 November 2018

Create CMake project with package information and install targets

Introduction

This information is in no way complete. I'm writing it down as I try to figure out how CMake works, and hopefully it is useful for others as well.

There is an example project which you can find here: cmake_project_example

The example project consists of a library and an application which uses the library. The CMake file for the library defines components for installation and also writes package configuration files which can be used with find_package.

All descriptions in this post refer to the libexample/CMakeLists.txt file.

Installing Header Files

There are two ways to install header files. When all public header files are located in a single directory, it is possible to use the install command to copy them:
 install(  
   DIRECTORY include/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}  
   )  
This command copies all files from the include directory to the destination directory (${CMAKE_INSTALL_INCLUDEDIR} and the other installation directory variables used here are provided by the GNUInstallDirs module).

It is also possible to list each public header file individually in the PUBLIC_HEADER target property and then use the install command to copy the header files to the specified directory:
 set_target_properties(${PROJECT_NAME} PROPERTIES  
 ...  
   PUBLIC_HEADER include/${PROJECT_NAME}/example.h  
   )  
 ...  
 install(TARGETS ${PROJECT_NAME}  
 ...  
   PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME} COMPONENT Development  
   )  

Depending on the project structure, one of the two solutions may be better suited (see Why add header files into ADD_LIBRARY/ADD_EXECUTABLE command in CMake).

Add Package Configuration Files

Package configuration files allow other CMake projects to find the library, which makes it possible to include its header files and link against it. CMake searches a set of predefined paths for the package configuration file (see the Search Procedure section of the find_package documentation).

A target is specified in the install command:
 install(TARGETS ${PROJECT_NAME}  
   EXPORT ${PROJECT_NAME}Targets  
   ...  
   INCLUDES DESTINATION ${CMAKE_INSTALL_INCLUDEDIR} COMPONENT Development  
   )  

The INCLUDES DESTINATION option does not install any header files, but it determines the include path when using the package. In this example the header files are installed in the directory ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME} (e.g. /usr/local/include/example/), but since the include path is ${CMAKE_INSTALL_INCLUDEDIR}, the include directive has to be
 #include <example/example.h>  

Then a targets file for the project is generated and installed:
 install(EXPORT ${PROJECT_NAME}Targets  
  FILE  
   ${PROJECT_NAME}Targets.cmake  
  NAMESPACE  
   ${PROJECT_NAME}::  
  DESTINATION  
   ${ConfigPackageLocation}  
  COMPONENT  
   Development  
 )  

The targets file is then included from the Config.cmake.in file:
 include("${CMAKE_CURRENT_LIST_DIR}/@PROJECT_NAME@Targets.cmake")  

and finally the configuration files for the package will be generated:
 include(CMakePackageConfigHelpers)  
 ...  
 configure_package_config_file(  
   ${PROJECT_NAME}Config.cmake.in  
   ${PROJECT_NAME}Config.cmake  
   INSTALL_DESTINATION "${ConfigPackageLocation}"  
   PATH_VARS CMAKE_INSTALL_PREFIX  
   )  
 write_basic_package_version_file(  
   ${PROJECT_NAME}ConfigVersion.cmake  
   VERSION ${EXAMPLE_VERSION_STRING}  
   COMPATIBILITY AnyNewerVersion  
   )  
 install(  
  FILES  
   "${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake"  
   "${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake"  
  DESTINATION  
   ${ConfigPackageLocation}  
  COMPONENT  
   Development  
 )  

 Installing Components

The install command lets you define COMPONENTs. This makes it possible, for example, to install only the library or only the development files, which can be helpful for packagers who want to create separate packages.

Define a COMPONENT name in the install command:
 install(TARGETS ${PROJECT_NAME}  
   EXPORT ${PROJECT_NAME}Targets  
   ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR} COMPONENT Library  
   LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} COMPONENT Library  
   RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT Library # This is for Windows  
   INCLUDES DESTINATION ${CMAKE_INSTALL_INCLUDEDIR} COMPONENT Development  
   PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME} COMPONENT Development  
   )  
 install(EXPORT ${PROJECT_NAME}Targets  
  FILE  
   ${PROJECT_NAME}Targets.cmake  
  NAMESPACE  
   ${PROJECT_NAME}::  
  DESTINATION  
   ${ConfigPackageLocation}  
  COMPONENT  
   Development  
 )  

If you want to install a specific component you can then use the following command:
 $ DESTDIR="$(pwd)/install" cmake -DCOMPONENT=Development -P cmake_install.cmake  
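In the same way only the runtime files could be staged, using the Library component name defined above:
 $ DESTDIR="$(pwd)/install" cmake -DCOMPONENT=Library -P cmake_install.cmake  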

Using Library Without Installing

It is possible to use the library in another project without installing it. You have to append the following line to libexample/CMakeLists.txt:
 export (PACKAGE ${PROJECT_NAME})  

This command registers the build directory in the CMake user package registry in the user's home directory (~/.cmake/packages), so that find_package can locate the package without an installation.

Linking To The Library From Another Project

Then it is possible to link to the library from another project:
 target_link_libraries(<project name> <namespace>::<library project name>)   
e.g.:
 target_link_libraries(myProject example::example)  
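For reference, a minimal CMakeLists.txt of a consuming project could look like the following sketch (the target name myProject and the source file main.cpp are placeholders; the package name example matches the library project from above):
 cmake_minimum_required(VERSION 3.5)  
 project(myProject)  
  
 # Find the installed (or registered) example package via its Config file  
 find_package(example REQUIRED)  
  
 add_executable(myProject main.cpp)  
  
 # Linking against the imported target also adds its include directories  
 target_link_libraries(myProject example::example)  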

Thursday, 25 October 2018

Use PAM to mount network share in Ubuntu 18.04

In this example a per-user pam_mount configuration will be created which mounts a Windows network share at login.

At first make sure you have the cifs-utils and libpam-mount packages installed on your system:

bash$ sudo apt-get install cifs-utils libpam-mount

Then it is necessary to change the following lines in the pam_mount configuration file (the changed parts, the luserconf element and the credentials mount option, are explained below).
/etc/security/pam_mount.conf.xml:
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<!--
        See pam_mount.conf(5) for a description.
-->

<pam_mount>

                <!-- debug should come before everything else,
                since this file is still processed in a single pass
                from top-to-bottom -->

<debug enable="0" />

                <!-- Volume definitions -->


                <!-- pam_mount parameters: General tunables -->

<luserconf name=".pam_mount.conf.xml" />

<!-- Note that commenting out mntoptions will give you the defaults.
     You will need to explicitly initialize it with the empty string
     to reset the defaults to nothing. -->
<mntoptions allow="nosuid,nodev,loop,encryption,fsck,nonempty,allow_root,allow_other,credentials" />
<!--
<mntoptions deny="suid,dev" />
<mntoptions allow="*" />
<mntoptions deny="*" />
-->
<mntoptions require="nosuid,nodev" />

<!-- requires ofl from hxtools to be present -->
<logout wait="0" hup="no" term="no" kill="no" />


                <!-- pam_mount parameters: Volume-related -->

<mkmountpoint enable="1" remove="true" />


</pam_mount>
In luserconf the name of the per-user pam_mount configuration file is specified. It is also necessary to add credentials as an allowed mount option, so that a file with the user's credentials can be specified.

The file in the user directory looks like this:

~/.pam_mount.conf.xml:
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">

<pam_mount>
        <volume
        options="nodev,nosuid,credentials=/home/%(USER)/.smb.cred"
        user="*"
        mountpoint="<path to mount point>"
        path="<path to share>"
        server="<server name>"
        fstype="cifs" />
</pam_mount>
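
As a concrete illustration (server name, share path and mount point are made-up examples), a filled-in volume entry could look like this:
        <volume
        options="nodev,nosuid,credentials=/home/%(USER)/.smb.cred"
        user="*"
        mountpoint="/home/%(USER)/winshare"
        path="documents"
        server="fileserver.example.com"
        fstype="cifs" />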

In the credentials file the user name, password and domain are set:

~/.smb.cred:
username=<username>
password=<password>
domain=<domain>

The obvious disadvantage of this method is that the password is stored in clear text on the system, unless the user's home directory is encrypted. The password can be omitted, though, if it is the same for the user login and for mounting the network share. This probably also applies to the user name.

The next step restricts the file access rights to a bare minimum, so that only the current user can read and write the files:

bash$ chmod 600 ~/.pam_mount.conf.xml ~/.smb.cred

If a problem occurs and the network share is not mounted, debugging can be enabled in the pam_mount configuration file.

/etc/security/pam_mount.conf.xml:
<debug enable="1" />

The debug messages can then be found in /var/log/auth.log.

Friday, 14 September 2018

Mount Windows network shares with Dolphin and Nautilus in Ubuntu

Make sure you have installed the cifs-utils and smbclient packages:
# apt-get install cifs-utils smbclient
After you have installed the packages try to access the Windows share with the smbclient tool:
$ smbclient -U <user name> -L //<server name>  -W <domain name>
 
If you get the following error message: protocol negotiation failed: NT_STATUS_INVALID_NETWORK_RESPONSE
try to specify a protocol version:
$ smbclient -U <user name> -L //<server name>  -W <domain name> -m <protocol version, e. g. SMB2>
If specifying the protocol version helps, you also need to create a configuration file with the following content, so that Nautilus and Dolphin use the same protocol version:
~/.smb/smb.conf
[global]
  workgroup = <domain name>
  client max protocol = <protocol version>

It should now be possible to access the shares with Nautilus and Dolphin. Use the following format to mount the network share:
  • Nautilus: smb://<domain name>,<user name>@<server name>/<share name>
  • Dolphin: smb://<domain name>\<user name>@<server name>/<share name>
 Another option would be to mount the network share:
$ sudo mount --types cifs --options uid=${USER},gid=100,user=<windows user name>,domain=<domain name> //<server name>/<share name> <mount path>
The directory where the share should be mounted must exist.
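For example (server, share and user names are placeholders), the mount point is created first and then the share is mounted:
$ sudo mkdir -p /mnt/share
$ sudo mount --types cifs --options uid=${USER},gid=100,user=jdoe,domain=EXAMPLE //fileserver/documents /mnt/share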

Tuesday, 14 November 2017

Windows firewall rule for Open DHCP Server

The Windows firewall will normally block requests to the DHCP server, so an exception has to be added to the firewall rules.
Open Windows Defender Firewall with Advanced Security, go to Inbound Rules and create a new rule.
In the General tab make sure the rule is enabled and the Action is Allow the connection. In Programs and Services select the Open DHCP Server executable. In Protocols and Ports select UDP as Protocol type and enter 67, 68, 547, 546 as Specific Ports. In the Advanced tab make sure the rule is selected for the current profile.
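
Alternatively, an equivalent rule can be created from an elevated command prompt with netsh (the program path is only an example and has to be adjusted to the actual installation):
netsh advfirewall firewall add rule name="Open DHCP Server" dir=in action=allow protocol=UDP localport=67,68,547,546 program="C:\OpenDHCPServer\OpenDHCPServer.exe" enable=yes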

Monday, 6 November 2017

This article describes how to set up an Eclipse project so that it is possible to debug a program running on the NXHX51-ETM eval board from Hilscher. Hopefully the procedure is similar for other processors/eval boards.
This tutorial expects that you have set up your project in Eclipse and that it compiles using the GCC cross compiler.

Prerequisites

The following software is needed to be able to debug: OpenOCD, the arm-none-eabi GCC toolchain (including gdb and readelf) and the GNU MCU Eclipse plugin. All of these are covered in the sections below.

gdb Standalone

You can try to debug the program on the command line first to check if everything is working fine.

Find Program Entry Point

The program entry point has to be determined so that the program counter can be set to the correct position before running the program.
The entry point can be determined with the readelf tool:
"<Path to netx Studio GCC-installation>\arm-none-eabi-readelf.exe" -l "<ELF-file>"

[Screenshot: example readelf output showing the ELF entry point]

Starting OpenOCD

 The debug server can be started with the following command:
"<path to openocd>\bin\openocd.exe" -c "gdb_port 3333" -c "adapter_khz 1000" -s "<path to netx openocd scripts>" -f interface\hilscher_nxhx_onboard.cfg -f board\hilscher_nxhx51.cfg -c "load_image <ELF-file> 0x0 elf" -c "puts gdb-server-ready"
It is important to replace the "\" with "/" in the path of the ELF file for the load_image command.
The path to the OpenOCD script files would be:
%ProgramData%\Hilscher GmbH\netX Studio CDT\BuildTools\openocd

Starting gdb

The debugger can then be started with the following command:
"<path to netx Studio GCC-installation>\arm-none-eabi-gdb.exe" --eval-command="target remote localhost:3333" "<ELF-file>"
If you use the Hilscher compiler the GCC installation is in the following directory:
%ProgramData%\Hilscher GmbH\netX Studio CDT\BuildTools\arm-none-eabi-gcc\4.5.2\bin
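
When debugging manually on the command line, the program counter can then be set to the previously determined entry point inside gdb (the address is only an example):
(gdb) set $pc = 0x08000000
(gdb) break main
(gdb) continue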

Debugging Using Eclipse

Before setting up Eclipse make sure you have installed the GNU MCU Eclipse plugin.

Setup Eclipse

 Create a new debug configuration in Eclipse. Go to Run -> Debug Configurations... and create a new GDB Hardware Debugging configuration.
 Select the ELF-file which should be debugged and select Disable auto build if needed.
 In the Debugger dialog select the path to the GCC installation. Select Use remote target and select GNU ARM OpenOCD as JTAG Device. Select the Port number which was given as command line argument to OpenOCD.
In the Startup dialog, set the program counter to the entry point which was determined previously, and store the settings with Apply.

Start Debug Session

First start OpenOCD:
"<path to openocd>\bin\openocd.exe" -c "gdb_port 3333" -c "adapter_khz 1000" -s "<path to netx openocd scripts>" -f interface\hilscher_nxhx_onboard.cfg -f board\hilscher_nxhx51.cfg -c "load_image <ELF-file> 0x0 elf"
Then start the previously configured debug session.

Example Scripts to Run Program Directly on Target

environment.cmd
run_openocd.cmd

Friday, 25 August 2017

The following code snippet shows the cases in which the different constructors and assignment operators are chosen by the compiler: 

#include <iostream>
#include <vector>

using namespace std;

template<typename T>
class MyClass
{
public:
  MyClass(vector<T> inputVector);
  MyClass(const MyClass<T> &other);
  MyClass(MyClass &&other) noexcept;
  MyClass &operator=(const MyClass &other);
  MyClass &operator=(MyClass &&other);

  virtual ~MyClass();

private:
  vector<T> mVector;
};

template<typename T>
MyClass<T>::MyClass(vector<T> inputVector)
  : mVector{inputVector}
{
  cout << "\tMyClass constructor" << endl;
}

template<typename T>
MyClass<T>::MyClass(const MyClass<T> &other)
  : mVector{other.mVector}
{
  cout << "\tMyClass copy constructor" << endl;
}

template<typename T>
MyClass<T>::MyClass(MyClass<T> &&other) noexcept
  : mVector{std::move(other.mVector)}
{
  cout << "\tMyClass move constructor" << endl;
}

template<typename T>
MyClass<T>& MyClass<T>::operator=(const MyClass<T> &other)
{
  cout << "\tMyClass assignment operator" << endl;
  this->mVector = other.mVector;
  return *this;
}

template<typename T>
MyClass<T> &MyClass<T>::operator=(MyClass<T> &&other)
{
  cout << "\tMyClass move assignment operator" << endl;
  this->mVector = std::move(other.mVector);
  return *this;
}


template<typename T>
MyClass<T>::~MyClass()
{
  cout << "\tMyClass vector size: " << mVector.size() << endl;
}


MyClass<int> GetClass()
{
  MyClass<int> obj{ {1, 2} };
  return obj;
}

MyClass<int> GetClass2()
{
  return MyClass<int>{ {1, 2} };
}

int main()
{
  {
    cout << "Constructor" << endl;
    MyClass<int> myClass{ { 1, 2 } };
  }
  {
    cout << "Move constructor" << endl;
    MyClass<int> myClass{GetClass()};
  }
  {
    cout << "Assignment operator" << endl;
    MyClass<int> origObj{ { 1, 2 } };
    MyClass<int> myClass{ { 3, 4 } };
    myClass = origObj;
  }
  {
    cout << "Move assignment operator" << endl;
    MyClass<int> myClass{ {3, 4} };
    myClass = GetClass2();
  }

  // References are handled differently:
  // A reference does not create a new object, it only refers to an existing one.
  // The references have to be const lvalue references because the objects
  // returned by the GetClass functions are rvalues (temporaries), and a
  // temporary can only bind to a const lvalue reference (or an rvalue
  // reference); the binding extends the temporary's lifetime.
  {
    cout << "Reference construction (direct initialization)" << endl;
    const MyClass<int> &myClass{GetClass()};
  }
  {
    cout << "Reference construction (assignment)" << endl;
    const MyClass<int> &myClass = GetClass();
  }

  return 0;
}



This should result in the following output when the program is run (the exact sequence may vary depending on compiler optimizations such as copy elision):
Constructor
        MyClass constructor
        MyClass vector size: 2
Move constructor
        MyClass constructor
        MyClass move constructor
        MyClass vector size: 0
        MyClass vector size: 2
Assignment operator
        MyClass constructor
        MyClass constructor
        MyClass assignment operator
        MyClass vector size: 2
        MyClass vector size: 2
Move assignment operator
        MyClass constructor
        MyClass constructor
        MyClass move assignment operator
        MyClass vector size: 0
        MyClass vector size: 2
Reference construction (direct initialization)
        MyClass constructor
        MyClass move constructor
        MyClass vector size: 0
        MyClass vector size: 2
Reference construction (assignment)
        MyClass constructor
        MyClass move constructor
        MyClass vector size: 0
        MyClass vector size: 2

Saturday, 12 August 2017

Developing a kernel module on Archlinux with Eclipse

In this blog post I will try to write down everything on how to set up an environment for developing kernel drivers on Archlinux.

In my case I wanted to use the current git version of the kernel for development, so I downloaded linux-git from the AUR:
$ git clone https://aur.archlinux.org/linux-git.git
and then built the kernel:
$ nice -n19 makepkg

Now start Eclipse and create your project.

Go to Project -> Properties -> C/C++ General -> Preprocessor Include -> GNU C and delete all entries in CDT Managed Build Setting entries.


Select CDT User Settings Entries and then click Add...
Select Include Directory and File System Path
Select Contains system headers and select the <path to abs package>/pkg/linux-git-headers/usr/lib/modules/<linux version>/build/include/



Repeat the above steps for the <path to abs package>/pkg/linux-git-headers/usr/lib/modules/<linux version>/build/arch/x86/include/ directory


Now click Add... again and select Preprocessor Macro File and File System Path and add <path to abs package>/pkg/linux-git-headers/usr/lib/modules/<linux version>/build/include/linux/kconfig.h

 Now go to Paths and Symbols, select #Symbols and add a symbol named __KERNEL__ and give it a value of 1.
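
To verify that the include paths and the __KERNEL__ symbol are picked up correctly by the indexer, a minimal module like the following sketch can be used (the file name hello.c and all other names are arbitrary examples):

#include <linux/init.h>
#include <linux/module.h>

/* Called when the module is loaded */
static int __init hello_init(void)
{
        pr_info("hello: module loaded\n");
        return 0;
}

/* Called when the module is removed */
static void __exit hello_exit(void)
{
        pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

It can be built out of tree against the downloaded kernel with a Makefile containing obj-m += hello.o and a call to make -C <path to abs package>/pkg/linux-git-headers/usr/lib/modules/<linux version>/build M=$(PWD) modules.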