
Cross Compile For Mac On Linux


Is there a way to cross-compile for macOS and iOS on Ubuntu? The mingw-w64 packages work well for targeting Windows, and the gcc-arm-linux packages cover Android, but there is no equivalent Darwin toolchain in the standard Ubuntu repositories. (Fedora does ship a Darwin cross compiler, so the legal questions around Apple's tooling are evidently not an absolute blocker.) For macOS, the usual answer is OSXCross: Clang/LLVM is a cross compiler by default and is available on nearly every Linux distribution, so all that is needed is a proper port of the cctools/ld64 linker tools and a copy of the macOS SDK. OSXCross includes a collection of scripts for preparing the SDK and building cctools/ld64. iOS is the odd one out: building and signing iOS applications still effectively requires a Mac, because tools such as xcodebuild do not exist on Linux.
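As a concrete illustration, a minimal OSXCross-based build might look like the sketch below. This assumes you have already packaged a macOS SDK tarball per the OSXCross README; the wrapper compiler name (o64-clang) and directory layout come from that project, and the SDK filename is a placeholder.

```shell
# Sketch: producing a macOS x86_64 binary from Linux with OSXCross.
git clone https://github.com/tpoechtrager/osxcross.git
cd osxcross
# Place your packaged macOS SDK (e.g. MacOSX11.3.sdk.tar.xz) in tarballs/
./build.sh                        # builds cctools/ld64 and the clang wrappers
export PATH="$PWD/target/bin:$PATH"
o64-clang -O2 hello.c -o hello    # cross-compile hello.c for macOS
```

The resulting binary targets macOS and will not run on the Linux build host.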

Abstract

This cuDNN 8.0.4 Installation Guide provides step-by-step instructions on how to install and check for correct operation of cuDNN on Linux and Microsoft Windows systems.

For previously released cuDNN installation documentation, see cuDNN Archives.

1. Overview

The NVIDIA® CUDA® Deep Neural Network library™ (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN is part of the NVIDIA® Deep Learning SDK.

Deep learning researchers and framework developers worldwide rely on cuDNN for high-performance GPU acceleration. It allows them to focus on training neural networks and developing software applications rather than spending time on low-level GPU performance tuning. cuDNN accelerates widely used deep learning frameworks and is freely available to members of the NVIDIA Developer Program™.

2. Installing cuDNN On Linux

2.1. Prerequisites

Ensure you meet the following requirements before you install cuDNN.
  • For the latest compatibility software versions of the OS, CUDA, the CUDA driver, and the NVIDIA hardware, see the cuDNN Support Matrix.

2.1.1. Installing NVIDIA Graphics Drivers

Install up-to-date NVIDIA graphics drivers on your Linux system.

Procedure

  1. Go to: NVIDIA download drivers
  2. Select the GPU and OS version from the drop-down menus.
  3. Download and install the NVIDIA graphics driver as indicated on that web page. For more information, select the ADDITIONAL INFORMATION tab for step-by-step instructions for installing a driver.
  4. Restart your system to ensure the graphics driver takes effect.

2.1.2. Installing The CUDA Toolkit For Linux

Refer to the following instructions for installing CUDA on Linux, including the CUDA driver and toolkit: NVIDIA CUDA Installation Guide for Linux.

2.2. Downloading cuDNN For Linux

In order to download cuDNN, ensure you are registered for the NVIDIA Developer Program.

Procedure

  1. Go to: NVIDIA cuDNN home page.
  2. Click Download.
  3. Complete the short survey and click Submit.
  4. Accept the Terms and Conditions. A list of available download versions of cuDNN displays.
  5. Select the cuDNN version you want to install. A list of available resources displays.

2.3. Installing cuDNN On Linux

The following steps describe how to build a cuDNN dependent program. Choose the installation method that meets your environment needs. For example, the tar file installation applies to all Linux platforms, and the Debian installation package applies to Ubuntu 16.04 and 18.04.

In the following sections:
  • your CUDA directory path is referred to as /usr/local/cuda/
  • your cuDNN download path is referred to as

2.3.1. Installing From A Tar File

Before issuing the following commands, you'll need to replace x.x and v8.x.x.x with your specific CUDA version, cuDNN version, and package date.
  1. Navigate to your directory containing the cuDNN Tar file.
  2. Unzip the cuDNN package.


  3. Copy the following files into the CUDA Toolkit directory, and change the file permissions.
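The commands for steps 2 and 3 were lost in this copy of the guide; the following sketch shows their typical shape for a cuDNN 8 tar install, with the archive name and version numbers as placeholders you must replace:

```shell
# Step 2: unpack the cuDNN tarball (produces a cuda/ directory).
tar -xzvf cudnn-x.x-linux-x64-v8.x.x.x.tgz

# Step 3: copy headers and libraries into the CUDA Toolkit tree
# and make them readable by all users.
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
```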

2.3.2. Installing From A Debian File

Before issuing the following commands, you'll need to replace x.x and 8.x.x.x with your specific CUDA version, cuDNN version, and package date.

Procedure

  1. Navigate to your directory containing the cuDNN Debian file.
  2. Install the runtime library, for example:


  3. Install the developer library, for example:


  4. Install the code samples and the cuDNN library documentation, for example:

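The example commands for steps 2–4 are missing here; for cuDNN 8 Debian packages they typically take the following form (the exact version string in each filename is a placeholder):

```shell
# Step 2: install the runtime library.
sudo dpkg -i libcudnn8_8.x.x.x-1+cudax.x_amd64.deb
# Step 3: install the developer library.
sudo dpkg -i libcudnn8-dev_8.x.x.x-1+cudax.x_amd64.deb
# Step 4: install the code samples and documentation.
sudo dpkg -i libcudnn8-samples_8.x.x.x-1+cudax.x_amd64.deb
```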

2.3.3. Installing From An RPM File

Procedure

  1. Download the rpm package libcudnn*.rpm to the local path.
  2. Install the rpm package from the local path. This will install the cuDNN libraries.

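A sketch of the RPM installation for step 2, with placeholder version strings in the package names:

```shell
# Install the runtime, developer, and samples packages from the local path.
sudo rpm -ivh libcudnn8-8.x.x.x-1.cudax.x.x86_64.rpm
sudo rpm -ivh libcudnn8-devel-8.x.x.x-1.cudax.x.x86_64.rpm
sudo rpm -ivh libcudnn8-samples-8.x.x.x-1.cudax.x.x86_64.rpm
```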

2.4. Verifying The cuDNN Install On Linux

To verify that cuDNN is installed and is running properly, compile the mnistCUDNN sample located in the /usr/src/cudnn_samples_v8 directory in the Debian file.

Procedure

  1. Copy the cuDNN sample to a writable path.
  2. Go to the writable path.
  3. Compile the mnistCUDNN sample.
  4. Run the mnistCUDNN sample.
    If cuDNN is properly installed and running on your Linux system, you will see a message similar to the following:
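The four verification steps above can be sketched as follows, using the default sample and CUDA paths from this guide:

```shell
# Steps 1-2: copy the samples to a writable path and enter it.
cp -r /usr/src/cudnn_samples_v8/ $HOME
cd $HOME/cudnn_samples_v8/mnistCUDNN
# Step 3: compile the sample.
make clean && make
# Step 4: run it; on a working install the output should end with
# a line reading "Test passed!".
./mnistCUDNN
```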

2.5. Upgrading From v7 To v8

Since version 8 can coexist with previous versions of cuDNN, installing version 8 will not automatically delete an older revision such as v6 or v7. Therefore, if you want the latest version, install cuDNN version 8 by following the installation steps.
To upgrade from v7 to v8 for RHEL, run:

To switch between v7 and v8 installations, issue sudo update-alternatives --config libcudnn and choose the appropriate cuDNN version.
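The RHEL upgrade command was stripped from this copy; on RHEL-style systems the upgrade and the subsequent version switch are typically driven like this (package names are a sketch, not verified against your repository):

```shell
# Install the cuDNN 8 packages alongside an existing v7 installation.
sudo yum install libcudnn8 libcudnn8-devel
# Choose which installed major version the libcudnn alternatives point at.
sudo update-alternatives --config libcudnn
```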

2.6. Troubleshooting

Join the NVIDIA Developer Forum to post questions and follow discussions.

3. Installing cuDNN On Windows

3.1. Prerequisites

Ensure you meet the following requirements before you install cuDNN.
  • For the latest compatibility software versions of the OS, CUDA, the CUDA driver, and the NVIDIA hardware, see the cuDNN Support Matrix.

3.1.1. Installing NVIDIA Graphic Drivers

Install up-to-date NVIDIA graphics drivers on your Windows system.
  1. Go to: NVIDIA download drivers
  2. Select the GPU and OS version from the drop-down menus.
  3. Download and install the NVIDIA driver as indicated on that web page. For more information, select the ADDITIONAL INFORMATION tab for step-by-step instructions for installing a driver.
  4. Restart your system to ensure the graphics driver takes effect.

3.1.2. Installing The CUDA Toolkit For Windows

Refer to the following instructions for installing CUDA on Windows, including the CUDA driver and toolkit: NVIDIA CUDA Installation Guide for Windows.

3.2. Downloading cuDNN For Windows

In order to download cuDNN, ensure you are registered for the NVIDIA Developer Program.

Procedure

  1. Go to: NVIDIA cuDNN home page.
  2. Click Download.
  3. Complete the short survey and click Submit.
  4. Accept the Terms and Conditions. A list of available download versions of cuDNN displays.
  5. Select the cuDNN version you want to install. A list of available resources displays.
  6. Extract the cuDNN archive to a directory of your choice.

3.3. Installing cuDNN On Windows

The following steps describe how to build a cuDNN dependent program.

Before issuing the following commands, you'll need to replace x.x and 8.x.x.x with your specific CUDA version, cuDNN version, and package date.

In the following sections, CUDA v9.0 is used as an example:
  • Your CUDA directory path is referred to as C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x
  • Your cuDNN directory path is referred to as
  1. Navigate to your directory containing cuDNN.
  2. Unzip the cuDNN package.
  3. Copy the following files into the CUDA Toolkit directory.
    1. Copy cuda\bin\cudnn*.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\bin.
    2. Copy cuda\include\cudnn*.h to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\include.
    3. Copy cuda\lib\x64\cudnn*.lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\lib\x64.
  4. Set the following environment variables to point to where cuDNN is located. To access the value of the $(CUDA_PATH) environment variable, perform the following steps:
    1. Open a command prompt from the Start menu.
    2. Type Run and hit Enter.
    3. Issue the control sysdm.cpl command.
    4. Select the Advanced tab at the top of the window.
    5. Click Environment Variables at the bottom of the window.
    6. Ensure the following values are set:
  5. Include cudnn.lib in your Visual Studio project.
    1. Open the Visual Studio project and right-click on the project name.
    2. Click Linker > Input > Additional Dependencies.
    3. Add cudnn.lib and click OK.

3.4. Upgrading From v7 To v8

Navigate to your directory containing cuDNN and delete the old cuDNN lib and header files. Reinstall the latest cuDNN version by following the steps in Installing cuDNN On Windows.

3.5. Troubleshooting

Join the NVIDIA Developer Forum to post questions and follow discussions.

4. Cross-compiling cuDNN Samples

This section describes how to cross-compile cuDNN samples.

4.1. NVIDIA DRIVE OS Linux

Follow the steps below to cross-compile the samples on NVIDIA DRIVE OS Linux.

4.1.1. Installing The CUDA Toolkit For DRIVE OS

Before issuing the following commands, you'll need to replace x-x with your specific version.

  1. Download the CUDA Toolkit Ubuntu package: cuda*ubuntu*_amd64.deb
  2. Download the cross compile package: cuda*-cross-aarch64*_all.deb
  3. Execute the following commands:
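The commands for step 3 are missing here; they typically install the downloaded packages and then pull in the toolkit and cross packages via apt, along these lines (the cuda-toolkit-x-x version suffix is a placeholder):

```shell
# Install the host CUDA repository package and the aarch64 cross package.
sudo dpkg -i cuda*ubuntu*_amd64.deb
sudo dpkg -i cuda*-cross-aarch64*_all.deb
# Refresh the package index and install the toolkit plus cross bits.
sudo apt-get update
sudo apt-get install -y cuda-toolkit-x-x cuda-cross-aarch64*
```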

4.1.2. Installing cuDNN For DRIVE OS

  1. Download the Ubuntu package for your preferred version: *libcudnn8-cross-aarch64_*.deb
  2. Download the cross compile package: libcudnn8-dev-cross-aarch64_*.deb
  3. Execute the following commands:
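For step 3, a sketch of installing the two downloaded cross packages (filenames as listed in the steps above):

```shell
# Install the cuDNN cross-compilation runtime and developer packages.
sudo dpkg -i libcudnn8-cross-aarch64_*.deb
sudo dpkg -i libcudnn8-dev-cross-aarch64_*.deb
```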

4.1.3. Cross-compiling Samples For DRIVE OS

Copy the cudnn_samples_v8 directory to your home directory:
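A sketch of the copy and cross-build; the TARGET_ARCH make variable is the convention used by the cuDNN samples Makefile (an assumption here, verify against your samples):

```shell
# Copy the samples to your home directory and cross-compile for aarch64.
cp -r /usr/src/cudnn_samples_v8 $HOME
cd $HOME/cudnn_samples_v8/mnistCUDNN
make TARGET_ARCH=aarch64
```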

4.2. QNX

Follow the steps below to cross-compile the cuDNN samples on QNX:

4.2.1. Installing The CUDA Toolkit For QNX

Before issuing the following commands, you'll need to replace x-x with your specific version.

  1. Download the CUDA Toolkit Ubuntu package: cuda*ubuntu*_amd64.deb
  2. Download the cross compile package: cuda*-cross-aarch64*_all.deb
  3. Execute the following commands:

4.2.2. Installing cuDNN For QNX

  1. Download the Ubuntu package for your preferred version: *libcudnn8-cross-aarch64_*.deb
  2. Download the cross compile package: libcudnn8-devel-cross-aarch64_*.deb
  3. Execute the following commands:

4.2.3. Set The Environment Variables

To set the environment variables, issue the following commands:
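The variable settings were stripped from this copy; for a QNX cross build they generally point the build at the QNX SDP host tools and target sysroot, along these lines (every path below is a placeholder for your QNX installation):

```shell
# Point the toolchain at the QNX SDP host tools and target sysroot.
export QNX_HOST=/path/to/qnx/host/linux/x86_64
export QNX_TARGET=/path/to/qnx/target/qnx7
```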

4.2.4. Cross-compiling Samples For QNX

Copy the cudnn_samples_v8 directory to your home directory:

Before issuing the following commands, you'll need to replace 8.x.x with your specific version.
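A sketch of the QNX cross-build; the TARGET_OS and TARGET_ARCH make variables follow the cuDNN samples Makefile convention (an assumption here, verify against your samples):

```shell
# Copy the samples to your home directory and cross-compile for QNX/aarch64.
cp -r /usr/src/cudnn_samples_v8 $HOME
cd $HOME/cudnn_samples_v8/mnistCUDNN
make TARGET_ARCH=aarch64 TARGET_OS=QNX
```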

Notice

This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation ('NVIDIA') makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.

NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.


Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.

NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer ('Terms of Sale'). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.

NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer's own risk.

NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer's sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer's product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.

No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.

Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.

THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, 'MATERIALS') ARE BEING PROVIDED 'AS IS.' NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA's aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.

VESA DisplayPort

DisplayPort and DisplayPort Compliance Logo, DisplayPort Compliance Logo for Dual-mode Sources, and DisplayPort Compliance Logo for Active Cables are trademarks owned by the Video Electronics Standards Association in the United States and other countries.

HDMI

HDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.

ARM

ARM, AMBA and ARM Powered are registered trademarks of ARM Limited. Cortex, MPCore and Mali are trademarks of ARM Limited. All other brands or product names are the property of their respective holders. 'ARM' is used to represent ARM Holdings plc; its operating company ARM Limited; and the regional subsidiaries ARM Inc.; ARM KK; ARM Korea Limited.; ARM Taiwan Limited; ARM France SAS; ARM Consulting (Shanghai) Co. Ltd.; ARM Germany GmbH; ARM Embedded Technologies Pvt. Ltd.; ARM Norway, AS and ARM Sweden AB.

OpenCL

OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.

Trademarks

NVIDIA, the NVIDIA logo, and cuBLAS, CUDA, CUDA Toolkit, cuDNN, DALI, DIGITS, DGX, DGX-1, DGX-2, DGX Station, DLProf, GPU, JetPack, Jetson, Kepler, Maxwell, NCCL, Nsight Compute, Nsight Systems, NVCaffe, NVIDIA Ampere GPU architecture, NVIDIA Deep Learning SDK, NVIDIA Developer Program, NVIDIA GPU Cloud, NVLink, NVSHMEM, PerfWorks, Pascal, SDK Manager, T4, Tegra, TensorRT, TensorRT Inference Server, Tesla, TF-TRT, Triton Inference Server, Turing, and Volta are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.

Copyright

© 2017-2020 NVIDIA Corporation. All rights reserved.


Cross Compilers

A cross compiler is a compiler capable of creating executable code for a platform other than the one on which the compiler is running. For example, a compiler that runs on a Windows 7 PC but generates code that runs on an Android smartphone is a cross compiler.

A cross compiler is necessary to compile code for multiple platforms from one development host. Direct compilation on the target platform might be infeasible, for example on a microcontroller of an embedded system, because those systems contain no operating system. In paravirtualization, one computer runs multiple operating systems and a cross compiler could generate an executable for each of them from one main source.

Cross compilers are distinct from source-to-source compilers. A cross compiler is for cross-platform software development of machine code, while a source-to-source compiler translates from one programming language to another in text code. Both are programming tools.

Use

The fundamental use of a cross compiler is to separate the build environment from target environment. This is useful in several situations:

  • Embedded computers where a device has extremely limited resources. For example, a microwave oven will have an extremely small computer to read its touchpad and door sensor, provide output to a digital display and speaker, and to control the machinery for cooking food. This computer will not be powerful enough to run a compiler, a file system, or a development environment. Since debugging and testing may also require more resources than are available on an embedded system, cross-compilation can be less involved and less prone to errors than native compilation.
  • Compiling for multiple machines. For example, a company may wish to support several different versions of an operating system or to support several different operating systems. By using a cross compiler, a single build environment can be set up to compile for each of these targets.
  • Compiling on a server farm. Similar to compiling for multiple machines, a complicated build that involves many compile operations can be executed across any machine that is free, regardless of its underlying hardware or the operating system version that it is running.
  • Bootstrapping to a new platform. When developing software for a new platform, or the emulator of a future platform, one uses a cross compiler to compile necessary tools such as the operating system and a native compiler.
  • Compiling native code for emulators for older now-obsolete platforms like the Commodore 64 or Apple II by enthusiasts who use cross compilers that run on a current platform (such as Aztec C's MS-DOS 6502 cross compilers running under Windows XP).

Use of virtual machines (such as Java's JVM) resolves some of the reasons for which cross compilers were developed. The virtual machine paradigm allows the same compiler output to be used across multiple target systems, although this is not always ideal because virtual machines are often slower and the compiled program can only be run on computers with that virtual machine.

Typically the hardware architecture differs (e.g. compiling a program destined for the MIPS architecture on an x86 computer) but cross-compilation is also applicable when only the operating system environment differs, as when compiling a FreeBSD program under Linux, or even just the system library, as when compiling programs with uClibc on a glibc host.

Canadian Cross

The Canadian Cross is a technique for building cross compilers for other machines. Given three machines A, B, and C, one uses machine A (e.g. running Windows XP on an IA-32 processor) to build a cross compiler that runs on machine B (e.g. running Mac OS X on an x86-64 processor) to create executables for machine C (e.g. running Android on an ARM processor). When using the Canadian Cross with GCC, there may be four compilers involved:

  • The proprietary native compiler for machine A (1) (e.g. the compiler from Microsoft Visual Studio) is used to build the gcc native compiler for machine A (2).
  • The gcc native compiler for machine A (2) is used to build the gcc cross compiler from machine A to machine B (3).
  • The gcc cross compiler from machine A to machine B (3) is used to build the gcc cross compiler from machine B to machine C (4).

The end-result cross compiler (4) will not be able to run on build machine A; instead it would run on machine B to compile an application into executable code that would then be copied to machine C and executed on machine C.
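Under GCC's configure naming scheme, the stages above can be sketched as follows. The triplets are illustrative assumptions matching the examples given for machines A, B, and C, and the configure commands are only printed here, not executed:

```shell
# Illustrative triplets (assumptions), matching machines A, B, and C above:
A=i686-pc-mingw32         # build machine (Windows on IA-32)
B=x86_64-apple-darwin     # machine the final compiler will run on
C=arm-linux-androideabi   # machine the final compiler will target

# Step (3): a cross compiler that is built on A, runs on A, and targets B.
echo "configure --build=$A --host=$A --target=$B"
# Step (4): the Canadian Cross compiler: built on A, runs on B, targets C.
echo "configure --build=$A --host=$B --target=$C"
```

Only in step (4) do all three triplets differ, which is what distinguishes a Canadian Cross from an ordinary cross build.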

For instance, NetBSD provides a POSIX Unix shell script named build.sh which will first build its own toolchain with the host's compiler; this, in turn, will be used to build the cross compiler which will be used to build the whole system.

The term Canadian Cross came about because at the time that these issues were under discussion, Canada had three national political parties.[1]

Timeline of early cross compilers

  • 1979 – ALGOL 68C generated ZCODE; this aided porting the compiler and other ALGOL 68 applications to alternate platforms. Compiling the ALGOL 68C compiler required about 120 kB of memory, so on the Z80, whose 64 kB of memory is too small to compile the compiler, the compiler itself had to be cross compiled from the larger CAP capability computer or an IBM System/370 mainframe.

GCC and cross compilation

GCC, a free software collection of compilers, can be set up to cross compile. It supports many platforms and languages.

GCC requires that a compiled copy of binutils be available for each targeted platform. Especially important is the GNU Assembler. Therefore, binutils first has to be compiled correctly, with the switch --target=some-target sent to the configure script. GCC also has to be configured with the same --target option. GCC can then be run normally, provided that the tools binutils creates are available in the path (on UNIX-like operating systems, typically by extending the PATH environment variable).
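A minimal sketch of that PATH setup; both the target triplet and the install prefix below are assumptions:

```shell
# A minimal sketch, assuming binutils was configured with
# --target=arm-none-eabi and installed under $HOME/cross
# (both the triplet and the prefix are assumptions).
TARGET=arm-none-eabi
PREFIX="$HOME/cross"

# The cross binutils install tools named like $PREFIX/bin/arm-none-eabi-as
# and arm-none-eabi-ld; putting that directory on the PATH lets a GCC
# configured with the same --target find them:
export PATH="$PREFIX/bin:$PATH"
```

GCC's own configure and the subsequent make will then pick up the target-prefixed assembler and linker from that directory.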

Cross compiling GCC requires that a portion of the target platform's C standard library be available on the host platform. The programmer may choose to compile the full C library, but this choice could be unreliable. The alternative is to use newlib, which is a small C library containing only the most essential components required to compile C source code.

The GNU autotools packages (i.e. autoconf, automake, and libtool) use the notion of a build platform, a host platform, and a target platform. The build platform is where the compiler is actually compiled. In most cases, build should be left undefined (it will default from host). The host platform is always where the output artifacts from the compiler will be executed, whether the output is another compiler or not. The target platform is used when cross compiling cross compilers; it represents what type of object code the package itself will produce; otherwise the target platform setting is irrelevant.[2] For example, consider cross-compiling a video game that will run on a Dreamcast. The machine where the game is compiled is the build platform, while the Dreamcast is the host platform. The names host and target are relative to the compiler being used: they shift along the chain like son and grandson.[3]
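As a sketch of the Dreamcast example, the triplets below are assumptions, and the configure command is printed rather than run:

```shell
# Where the game is compiled:
BUILD=x86_64-pc-linux-gnu
# Where the game will run (the Dreamcast uses a Hitachi SH-4 CPU):
HOST=sh4-linux-gnu

# An autotools package is told about the mismatch like this:
echo "./configure --build=$BUILD --host=$HOST"
# --target would only matter if the package being built were itself
# a compiler or a binutils-style toolchain.
```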

Another method popularly used by embedded Linux developers involves the combination of GCC compilers with specialized sandboxes like Scratchbox, scratchbox2, or PRoot. These tools create a 'chrooted' sandbox where the programmer can build up necessary tools, libc, and libraries without having to set extra paths. Facilities are also provided to 'deceive' the runtime so that it 'believes' it is actually running on the intended target CPU (such as an ARM architecture); this allows configuration scripts and the like to run without error. Scratchbox runs more slowly by comparison to 'non-chrooted' methods, and most tools that are on the host must be moved into Scratchbox to function.

Manx Aztec C cross compilers

Manx Software Systems, of Shrewsbury, New Jersey, produced C compilers beginning in the 1980s targeted at professional developers for a variety of platforms up to and including PCs and Macs.

Manx's Aztec C programming language was available for a variety of platforms including MS-DOS, Apple II DOS 3.3 and ProDOS, Commodore 64, Macintosh 68XXX[4] and Amiga.

From the 1980s and continuing throughout the 1990s until Manx Software Systems disappeared, the MS-DOS version of Aztec C[5] was offered both as a native mode compiler or as a cross compiler for other platforms with different processors including the Commodore 64[6] and Apple II.[7] Internet distributions still exist for Aztec C including their MS-DOS based cross compilers. They are still in use today.

Manx's Aztec C86, their native mode 8086 MS-DOS compiler, was also a cross compiler. Although it did not compile code for a different processor like their Aztec C65 6502 cross compilers for the Commodore 64 and Apple II, it created binary executables for then-legacy operating systems for the 16 bit 8086 family of processors.

When the IBM PC was first introduced it was available with a choice of operating systems, CP/M-86 and PC DOS being two of them. Aztec C86 was provided with link libraries for generating code for both IBM PC operating systems. Throughout the 1980s later versions of Aztec C86 (3.xx, 4.xx and 5.xx) added support for MS-DOS 'transitory' versions 1 and 2,[8] which were less robust than the 'baseline' MS-DOS version 3 and later, which Aztec C86 targeted until its demise.

Finally, Aztec C86 provided C language developers with the ability to produce ROM-able 'HEX' code which could then be transferred using a ROM burner directly to an 8086 based processor. Paravirtualization may be more common today, but the practice of creating low-level ROM code was more common per capita during those years, when device driver development was often done by application programmers for individual applications and new devices amounted to a cottage industry. It was not uncommon for application programmers to interface directly with hardware without support from the manufacturer. This practice was similar to embedded systems development today.

Thomas Fenwick and James Goodnow II were the two principal developers of Aztec-C. Fenwick later became notable as the author of the Microsoft Windows CE kernel, or NK ('New Kernel') as it was then called.[9]

Microsoft C cross compilers

Early history – 1980s

Microsoft C (MSC) has a shorter history than others,[10] dating back to the 1980s. The first Microsoft C compilers were made by the same company that made Lattice C and were rebranded by Microsoft as their own until MSC 4 was released, the first version that Microsoft produced itself.[11]

In 1987 many developers started switching to Microsoft C, and many more would follow throughout the development of Microsoft Windows to its present state. Products like Clipper and later Clarion emerged that offered easy database application development by using cross language techniques, allowing part of their programs to be compiled with Microsoft C.

Borland C (from the California company Borland) was available for purchase years before Microsoft released its first C product.

Long before Borland, BSD Unix (from the University of California, Berkeley) had received C from the authors of the language, Kernighan and Ritchie, who developed it together while working at AT&T Bell Labs. K&R's original aim was not only a more elegant higher-level syntax to replace assembly language; the language was designed so that only a minimal amount of assembly needed to be written to support each platform, and the ability to cross compile using C with the least support code per platform was part of its original design. Early C also related directly to the underlying assembly code wherever the source was not platform dependent. Today's C (and more so C++) is far less close to the generated code, and the underlying assembly can differ greatly from what would be written by hand on a given platform (in Linux, for example, library calls are sometimes replaced or detoured according to distributor choices), even though C is still often used as if it were a low-level language.

1987

C programs had long been linked with modules written in assembly language. Most C compilers (even current compilers) offer an assembly language pass (that can be tweaked for efficiency then linked to the rest of the program after assembling).

Compilers like Aztec-C converted everything to assembly language as a distinct pass and then assembled the code in a distinct pass, and were noted for their very efficient and small code, but by 1987 the optimizer built into Microsoft C was very good, and only 'mission critical' parts of a program were usually considered for rewriting. In fact, C language programming had taken over as the 'lowest-level' language, with programming becoming a multi-disciplinary growth industry and projects becoming larger, with programmers writing user interfaces and database interfaces in higher-level languages, and a need had emerged for cross language development that continues to this day.

By 1987, with the release of MSC 5.1, Microsoft offered a cross language development environment for MS-DOS. 16 bit binary object code written in assembly language (MASM) and Microsoft's other languages including QuickBASIC, Pascal, and Fortran could be linked together into one program, in a process they called 'Mixed Language Programming' and now 'InterLanguage Calling'.[12] If BASIC was used in this mix, the main program needed to be in BASIC to support the internal runtime system that compiled BASIC required for garbage collection and its other managed operations that simulated a BASIC interpreter like QBasic in MS-DOS.

The calling convention for C code, in particular, was to pass parameters in 'reverse order' on the stack and return values on the stack rather than in a processor register. There were other programming rules to make all the languages work together, but this particular rule persisted through the cross language development that continued throughout Windows 16 and 32 bit versions and in the development of programs for OS/2, and which persists to this day. It is known as the Pascal calling convention.

Another type of cross compilation that Microsoft C was used for during this time was in retail applications that require handheld devices like the Symbol Technologies PDT3100 (used to take inventory), which provided a link library targeted at an 8088 based barcode reader. The application was built on the host computer then transferred to the handheld device (via a serial cable) where it was run, similar to what is done today for that same market using Windows Mobile by companies like Motorola, who bought Symbol.

Early 1990s

Throughout the 1990s and beginning with MSC 6 (their first ANSI C compliant compiler) Microsoft re-focused their C compilers on the emerging Windows market, and also on OS/2 and in the development of GUI programs. Mixed language compatibility remained through MSC 6 on the MS-DOS side, but the API for Microsoft Windows 3.0 and 3.1 was written in MSC 6. MSC 6 was also extended to provide support for 32-bit assemblies and support for the emerging Windows for Workgroups and Windows NT which would form the foundation for Windows XP. A programming practice called a thunk was introduced to allow passing between 16 and 32 bit programs that took advantage of runtime binding (dynamic linking) rather than the static binding that was favoured in monolithic 16 bit MS-DOS applications. Static binding is still favoured by some native code developers but does not generally provide the degree of code reuse required by newer best practices like the Capability Maturity Model (CMM).

MS-DOS support was still provided with the release of Microsoft's first C++ Compiler, MSC 7, which was backwardly compatible with the C programming language and MS-DOS and supported both 16 bit and 32 bit code generation.

MSC took over where Aztec C86 left off. The market share for C compilers had turned to cross compilers which took advantage of the latest and greatest Windows features, offered C and C++ in a single bundle, and still supported MS-DOS systems that were already a decade old, and the smaller companies that produced compilers like Aztec C could no longer compete and either turned to niche markets like embedded systems or disappeared.

MS-DOS and 16 bit code generation support continued until MSC 8.00c, which was bundled with Microsoft C++ and Microsoft Application Studio 1.5, the forerunner of Microsoft Visual Studio, which is the cross development environment that Microsoft provides today.

Late 1990s

MSC 12 was released with Microsoft Visual Studio 6 and no longer provided support for MS-DOS 16 bit binaries, instead providing support for 32 bit console applications, but provided support for Windows 95 and Windows 98 code generation as well as for Windows NT. Link libraries were available for other processors that ran Microsoft Windows; a practice that Microsoft continues to this day.

MSC 13 was released with Visual Studio 2003, and MSC 14 was released with Visual Studio 2005, both of which still produce code for older systems like Windows 95, but which will produce code for several target platforms including the mobile market and the ARM architecture.

.NET and beyond

In 2001 Microsoft developed the Common Language Runtime (CLR), which formed the core of their .NET Framework compiler in the Visual Studio IDE. This layer on top of the operating system's API allows the mixing of development languages compiled across platforms that run the Windows operating system.

The .NET Framework runtime and CLR provide a mapping layer to the core routines for the processor and the devices on the target computer. The command-line C compiler in Visual Studio will compile native code for a variety of processors and can be used to build the core routines themselves.

Microsoft .NET applications for target platforms like Windows Mobile on the ARM architecture cross-compile on Windows machines with a variety of processors and Microsoft also offer emulators and remote deployment environments that require very little configuration, unlike the cross compilers in days gone by or on other platforms.

Runtime libraries, such as Mono, provide compatibility for cross-compiled .NET programs to other operating systems, such as Linux.

Libraries like Qt and its predecessors including XVT provide source code level cross development capability with other platforms, while still using Microsoft C to build the Windows versions. Other compilers like MinGW have also become popular in this area since they are more directly compatible with the Unixes that comprise the non-Windows side of software development allowing those developers to target all platforms using a familiar build environment.

Free Pascal

Free Pascal was developed from the beginning as a cross compiler. The compiler executable (ppcXXX, where XXX is a target architecture) is capable of producing executables (or just object files if no internal linker exists, or even just assembly files if no internal assembler exists) for all operating systems of the same architecture. For example, ppc386 is capable of producing executables for i386-linux, i386-win32, i386-go32v2 (DOS) and all other OSes (see [13]). To compile for another architecture, however, a cross-architecture version of the compiler must be built first. The resulting compiler executable has an additional 'ross' before the target architecture in its name: if the compiler is built to target x64, the executable is named ppcrossx64.

To compile for a chosen architecture and OS, the compiler switches -P and -T (for the compiler driver fpc) can be used. The same selection is made when cross compiling the compiler itself, but is set via the make options CPU_TARGET and OS_TARGET. A GNU assembler and linker for the target platform are required if Free Pascal does not yet have an internal version of those tools for that platform.
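A sketch of those switches; the fpc and make commands are shown as comments since they require a Free Pascal installation, and the executable names follow the convention described above:

```shell
# Cross-compiling a program from x86-64 Linux to 64-bit Windows with the
# fpc driver (not run here):
#   fpc -Px86_64 -Twin64 hello.pas
# Cross-building the compiler itself from the Free Pascal source tree:
#   make all CPU_TARGET=x86_64 OS_TARGET=win64
# Resulting executable names:
native=ppcx64        # native x86-64 compiler
cross=ppcrossx64     # cross-built compiler targeting x86-64
echo "$cross"
```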


Clang

Clang is natively a cross compiler: at build time, you can select which architectures Clang should be able to target.
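A single Clang binary can then emit code for any architecture its build enabled, selected per invocation with a target triplet. The triplet below is an assumption, and the command is printed rather than run since no toolchain is assumed to be installed:

```shell
# Select the target per invocation via a triplet:
TRIPLE=aarch64-linux-gnu
echo "clang --target=$TRIPLE -c hello.c -o hello.o"
```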

References

  1. ^'4.9 Canadian Crosses'. CrossGCC. Archived from the original on October 9, 2004. Retrieved 2012-08-08. This is called a `Canadian Cross' because at the time a name was needed, Canada had three national parties.
  2. ^https://www.gnu.org/s/libtool/manual/automake/Cross_002dCompilation.html
  3. ^https://mesonbuild.com/Cross-compilation.html
  4. ^'Obsolete Macintosh Computers'. Archived from the original on 2008-02-26. Retrieved 2008-03-10.
  5. ^Aztec C
  6. ^Commodore 64
  7. ^Apple II
  8. ^MS-DOS TimelineArchived 2008-05-01 at the Wayback Machine
  9. ^Inside Windows CE (search for Fenwick)
  10. ^Microsoft Language Utility Version History
  11. ^History of PC based C-compilersArchived December 15, 2007, at the Wayback Machine
  12. ^Which Basic Versions Can CALL C, FORTRAN, Pascal, MASM
  13. ^'Free Pascal Supported Platform List'. Platform List. Retrieved 2010-06-17. i386

External links

  • Cross Compilation Tools – reference for configuring GNU cross compilation tools
  • Building Cross Toolchains with gcc is a wiki of other GCC cross-compilation references
  • Scratchbox is a toolkit for Linux cross-compilation to ARM and x86 targets
  • Grand Unified Builder (GUB) for Linux to cross-compile multiple architectures, e.g. Win32/Mac OS/FreeBSD/Linux; used by GNU LilyPond
  • Crosstool is a helpful toolchain of scripts, which create a Linux cross-compile environment for the desired architecture, including embedded systems
  • crosstool-NG is a rewrite of Crosstool and helps build toolchains.
  • buildroot is another set of scripts for building a uClibc-based toolchain, usually for embedded systems. It is utilized by OpenWrt.
  • ELDK (Embedded Linux Development Kit). Utilized by Das U-Boot.
  • T2 SDE is another set of scripts for building whole Linux Systems based on either GNU libC, uClibc or dietlibc for a variety of architectures
  • IBM has a clearly structured tutorial about cross-building a GCC toolchain.
  • (in French) Cross-compilation avec GCC 4 sous Windows pour Linux: a tutorial for building a cross-GCC toolchain, but from Windows to Linux, a subject rarely covered