
Intel® System Studio 2015 Beta for Linux Hosts Silent Installation Guide


Intel® System Studio 2015 Beta for Linux Hosts
"Silent" or non-interactive Installation Instructions

 

Navigation:

Linux and Mac OS X Compilers Installation Help Center: /en-us/articles/intel-compilers-linux-installation-help

 


Silent Command Line Installations

The Linux installation programs for Intel® System Studio 2015 Beta for Linux* Host are built using the PSET (Program Startup Experience Technologies) 2.0 core.  This PSET core is a framework of tools built by Intel to provide a robust set of installation and licensing features that are consistent across Intel product lines.  A similar PSET core is used for the Windows* and Mac OS* X installation packages as well.

One feature provided in the PSET core is support for the "silent" install.  Historically, "silent" really meant "non-interactive".  At this point, "silent" also means "does not report copious amounts of information", assuming there are no problems during the installation.  The silent install method allows the user to perform a command line installation of an entire package with no need to answer prompts or make product selections.

Silent Install Steps: "From Scratch"

To run the silent install, follow these steps:

  • For reasons outlined below, we recommend that a working product license or server license be in place before beginning.  The file should be world-readable and located in a standard Intel license file directory, such as the default Linux license directory /opt/intel/licenses.  For more details, keep reading.
  • Create a silent install configuration file, or edit an existing one.  This file controls the behavior of the installation.  Here is an example file; a similar file can be edited and placed in any directory on the target system.  After the example we explain the configuration file contents.
# silent.cfg
# Patterns used to check silent configuration file
#
# anythingpat - any string
# filepat     - the file location pattern (/file/location/to/license.lic)
# lspat       - the license server address pattern (0123@hostname)
# snpat      - the serial number pattern (ABCD-01234567)

# accept EULA, valid values are: {accept, decline}
ACCEPT_EULA=accept

# install mode for RPM system, valid values are: {RPM, NONRPM}
INSTALL_MODE=RPM

# optional error behavior, valid values are: {yes, no}
CONTINUE_WITH_OPTIONAL_ERROR=yes

# install location, valid values are: {/opt/intel, filepat}
PSET_INSTALL_DIR=/opt/intel

# continue with overwrite of existing installation directory, valid values are: {yes, no}
CONTINUE_WITH_INSTALLDIR_OVERWRITE=yes

# list of components to install, valid values are: {ALL, DEFAULTS, anythingpat}
COMPONENTS=DEFAULTS

# installation mode, valid values are: {install, modify, repair, uninstall}
PSET_MODE=install

# this one is optional
# directory for non-RPM database, valid values are: {filepat}
#NONRPM_DB_DIR=filepat

# Choose 1 of the 2 activation options - either serial or license
# license is needed if system does not have internet connectivity to Intel
#
# Serial number, valid values are: {snpat}
#ACTIVATION_SERIAL_NUMBER=snpat
#
# License file or license server, valid values are: {lspat, filepat}
#ACTIVATION_LICENSE_FILE=/put/a/full/path/and/licensefile.lic
#
# and based on the above, set the activation type: again, recommend using a license_file.
# exist_lic will look in the normal places for an existing license.
# Activation type, valid values are: {exist_lic, license_server, license_file, trial_lic, serial_number}
ACTIVATION_TYPE=exist_lic

# Sampling driver installation type, valid values are: {build, kit}
AMPLIFIER_SAMPLING_DRIVER_INSTALL_TYPE=build

# Power driver installation type, valid values are: {build, kit}
AMPLIFIER_POWER_DRIVER_INSTALL_TYPE=build

# Driver access group, valid values are: {anythingpat, vtune}
AMPLIFIER_DRIVER_ACCESS_GROUP=vtune

# Driver access permissions, valid values are: {anythingpat, 666}
AMPLIFIER_DRIVER_PERMISSIONS=666

# Load driver(s) into the kernel during installation, valid values are: {yes, no}
AMPLIFIER_LOAD_DRIVER=yes

# Path to C compiler, valid values are: {filepat, auto, none}
AMPLIFIER_C_COMPILER=auto

# Path to kernel source directory, valid values are: {filepat, auto, none}
AMPLIFIER_KERNEL_SRC_DIR=auto

# Path to make command, valid values are: {filepat, auto, none}
AMPLIFIER_MAKE_COMMAND=auto

# Install boot script to automatically reload the driver(s) on system restart, valid values are: {yes, no}
AMPLIFIER_INSTALL_BOOT_SCRIPT=yes

# Enable per-user collection mode for Sampling driver, valid values are: {yes, no}
AMPLIFIER_DRIVER_PER_USER_MODE=no

# Intel(R) Software Improvement Program opt-in, valid values are: {yes, no}
PHONEHOME_SEND_USAGE_DATA=no

# Perform validation of digital signatures of RPM files, valid values are: {yes, no}
SIGNING_ENABLED=yes
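When provisioning many machines, a configuration file like the one above can also be generated from a script. Here is a minimal sketch; the path /tmp/silent.cfg and the directive values chosen (exist_lic activation, /opt/intel target) are examples to adjust for your site:

```shell
#!/bin/sh
# Sketch: write a minimal silent.cfg containing the mandatory directives.
# The values below are examples; adapt them to your installation.
CFG=/tmp/silent.cfg

cat > "$CFG" <<'EOF'
ACCEPT_EULA=accept
INSTALL_MODE=RPM
CONTINUE_WITH_OPTIONAL_ERROR=yes
PSET_INSTALL_DIR=/opt/intel
CONTINUE_WITH_INSTALLDIR_OVERWRITE=yes
COMPONENTS=DEFAULTS
PSET_MODE=install
ACTIVATION_TYPE=exist_lic
PHONEHOME_SEND_USAGE_DATA=no
SIGNING_ENABLED=yes
EOF

# Sanity check: the EULA directive is mandatory for any silent install.
grep -q '^ACCEPT_EULA=accept$' "$CFG" && echo "silent.cfg written"
```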

Running the Silent Installation

Once you have created your silent installation configuration file, installation is quite simple.  First, extract the full compiler package tar file in a temporary directory.  For the purposes of this example we use /tmp as our temporary directory; you may use any directory in which you have full file permissions.  Do not untar the package in the directory where you intend to install the compiler; the temporary directory should be disjoint from your final target installation directory.

Untar the compiler package (this assumes the package has been copied to /tmp).  Your compiler version and package name may differ from those shown below:

  1. cd /tmp
  2. tar -zxvf  l_cembd_b_2015.0.020.tgz 

Now cd to the extracted directory

  1. cd l_cembd_b_2015.0.020

Run the install.sh installer program, passing the full path to your configuration file with the --silent option:

  1. ./install.sh --silent /tmp/silent.cfg

where "silent.cfg" is replaced by the name you used to create your silent configuration file.   You may use any name for this file.

DONE.  If your configuration file is accepted, the installer will proceed with the installation without further input from you, and no output will appear unless there is an error.
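The extract-and-run steps above can be wrapped in a small script. This is a sketch only: the package name is the example used in this article, and the DRY_RUN switch (which just prints the commands instead of running them) is our own addition for illustration.

```shell
#!/bin/sh
# Sketch: automate the untar + silent install steps described above.
# Set DRY_RUN=1 to print the commands instead of executing them.
silent_install() {
    pkg=$1; cfg=$2
    dir=/tmp/$(basename "$pkg" .tgz)
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "tar -zxf $pkg -C /tmp"
        echo "$dir/install.sh --silent $cfg"
        return 0
    fi
    tar -zxf "$pkg" -C /tmp && "$dir/install.sh" --silent "$cfg"
}

# Example (dry run) with the package name used in this article:
DRY_RUN=1 silent_install /tmp/l_cembd_b_2015.0.020.tgz /tmp/silent.cfg
```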

CONFIGURATION FILE FORMAT

A few comments on the directives inside the silent install configuration file:

ACCEPT_EULA=accept

  • This directive tells the install program that the invoking user has agreed to the End User License Agreement or EULA.  This is a mandatory option and MUST be set to 'accept'. If this is not present in the configuration file, the installation will not complete.  By using the silent installation program you are accepting the EULA.
  • The EULA is a plain text file in the same directory as the installer, with the file name "license".  Read it before proceeding, as using the silent installer means you have read and agree to the EULA.  If you have questions, go to our user forum: https://software.intel.com/en-us/forums/intel-software-development-products-download-registration-licensing

INSTALL_MODE=RPM

  • This directive tells the install program that the RPM method should be used to install the software.  This will only work if the install user is "root" or has full root privileges and your distribution supports RPM for package management.  In some cases, where the operating system of the target system does not support RPM, or where the install program detects that the version of RPM supported by the operating system is flawed or otherwise incompatible with the install program, the installation will proceed but will switch to non-RPM mode automatically.  This is the case for certain legacy operating systems (e.g. SLES9) and for operating systems that provide an RPM utility but do not use RPM to store or manage system-installed operating system infrastructure (e.g. Ubuntu, Debian).  THUS, Ubuntu and Debian users should set INSTALL_MODE=NONRPM.

  • If you do not want to use RPM, this line should read "INSTALL_MODE=NONRPM".  In this case, the products will be installed to the same location, but instead of storing product information in the system's RPM database, the Intel product install information will be stored in a flat file called "intel_sdp_products.db", usually stored in /opt/intel (or in $HOME/intel for non-root users).  To override this default, use the configuration file directive NONRPM_DB_DIR.

NONRPM_DB_DIR

  • If INSTALL_MODE=NONRPM the directive NONRPM_DB_DIR can be used to override the default directory for the installation database.  The default is /opt/intel or in $HOME/intel for non-root users.  The format for this directive is:
  • NONRPM_DB_DIR=/path/to/your/db/directory

ACTIVATION_TYPE=exist_lic

  • This directive tells the install program to look for an existing license during the install process.  This is the preferred method for silent installs.  Take the time to register your serial number and get a license file (see below).  Having a license file on the system simplifies the process.  In addition, as an administrator it is good practice to know WHERE your licenses are saved on your system.  License files are plain text files with a .lic extension.  By default these are saved in /opt/intel/licenses which is searched by default.  If you save your license elsewhere, perhaps under an NFS folder, set environment variable INTEL_LICENSE_FILE to the full path to your license file prior to starting the installation or use the configuration file directive ACTIVATION_LICENSE_FILE to specify the full pathname to the license file.
  • Options for ACTIVATION_TYPE are { exist_lic, license_file, license_server, serial_number, trial_lic }
    • exist_lic directs the installer to search for a valid license on the system.  The search will use the environment variable INTEL_LICENSE_FILE, search the default license directory /opt/intel/licenses, or use the ACTIVATION_LICENSE_FILE directive to find a valid license file.
    • license_file is similar to exist_lic but directs the installer to use ACTIVATION_LICENSE_FILE to find the license file.
    • license_server is similar to exist_lic but tells the installer that this is a client installation and that a floating license server will be contacted to activate the product.  This option contacts the floating license server on your network to retrieve the license information.  BEFORE using this option, make sure your client is correctly set up for your network, including all networking, routing, name service, and firewall configuration.  Ensure that your client has direct access to your floating license server and that firewalls allow TCP/IP access on the 2 license server ports.  license_server will use an INTEL_LICENSE_FILE in port@host format OR a client license file.  The formats for these are described here: https://software.intel.com/en-us/articles/licensing-setting-up-the-client-floating-license
    • serial_number directs the installer to use the directive ACTIVATION_SERIAL_NUMBER for activation.  This method requires the installer to contact an external Intel activation server over the Internet to confirm your serial number.  Because of user and company firewalls, this is the most complex and hence most error-prone of the available activation methods.  We highly recommend using a license file or license server for activation instead.
    • trial_lic is used only if you do not have an existing license and intend to temporarily evaluate the compiler.  This method creates a temporary trial license in Trusted Storage on your system.
  • No license file but you have a serial number?  If you have only a serial number, please visit https://registrationcenter.intel.com to register your serial number.  As part of registration, you will receive email with an attached license file.  If your serial is already registered and you need to retrieve a license file, read this:  https://software.intel.com/en-us/articles/how-do-i-manage-my-licenses
  • Save the license file in the /opt/intel/licenses/ directory, or in your preferred directory, and set the INTEL_LICENSE_FILE environment variable to that non-default location.  If you have already registered your serial number but have lost the license file, revisit https://registrationcenter.intel.com and click the hyperlinked product name to reach a screen where you can cut and paste, or mail yourself, a copy of your registered license file.
  • Still confused about licensing?  Go to our licensing FAQS page https://software.intel.com/en-us/articles/licensing-faq
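Before launching a silent install, it is worth verifying that the license file is where the installer will look and is world-readable, as recommended above. A sketch follows; a temporary directory stands in for /opt/intel/licenses so it can run unprivileged, and my_product.lic is a placeholder name:

```shell
#!/bin/sh
# Sketch: stage a license file and confirm the installer could read it.
# LICDIR stands in for /opt/intel/licenses; my_product.lic is a placeholder.
LICDIR=$(mktemp -d)
echo "# placeholder license contents" > "$LICDIR/my_product.lic"
chmod 644 "$LICDIR/my_product.lic"        # world-readable, as recommended
export INTEL_LICENSE_FILE="$LICDIR/my_product.lic"

# The installer (and later the compiler) must be able to read this file:
test -r "$INTEL_LICENSE_FILE" && echo "license file readable"
```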

ACTIVATION_LICENSE_FILE

  • This directive instructs the installer where to find your named-user or client license.  The format is:
  • ACTIVATION_LICENSE_FILE=/use/a/path/to/your/licensefile.lic  where licensefile.lic is the name of your license file.

CONTINUE_WITH_OPTIONAL_ERROR

  • This directive controls behavior when the installer encounters an "optional" error.  These errors are non-fatal and will not prevent the installation from proceeding if the user has set CONTINUE_WITH_OPTIONAL_ERROR=yes.  Examples of optional errors include an unrecognized or unsupported Linux distribution or version, or certain product prerequisites that cannot be found at the time of installation (such as a supported Java runtime, or missing 32-bit development libraries for a 32-bit tool installation).  Fatal errors found during installation will cause the installer to abort with appropriate messages printed.
  • CONTINUE_WITH_OPTIONAL_ERROR=yes directs the installer to ignore non-fatal installation issues and continue with the installation.
  • CONTINUE_WITH_OPTIONAL_ERROR=no directs the installer to abort with appropriate warning messages for the non-fatal error found during the installation.

PSET_INSTALL_DIR

  • This directive specifies the target directory for the installation.  The Intel Compilers default to /opt/intel for installation target.  Set this directive to the root directory for the final compiler installation.

CONTINUE_WITH_INSTALLDIR_OVERWRITE

  • Determines the behavior of the installer if PSET_INSTALL_DIR already contains an existing installation of this specific compiler version.  The Intel compilers allow multiple versions to co-exist on a system; this directive does not affect that behavior, as each version of the compiler has a unique installation structure that does not overwrite other versions.  This directive dictates behavior only when the SAME VERSION is already installed in PSET_INSTALL_DIR.
  • CONTINUE_WITH_INSTALLDIR_OVERWRITE=yes directs the installer to overwrite the existing compiler version of the SAME VERSION
  • CONTINUE_WITH_INSTALLDIR_OVERWRITE=no directs the installer to exit if an existing compiler installation of the SAME VERSION already exists in PSET_INSTALL_DIR

COMPONENTS

  • A typical compiler package contains multiple sub-packages, such as MKL, IPP, TBB, Debugger, etc.  This directive allows the user to control which sub-packages to install.
  • COMPONENTS=DEFAULTS directs the installer to install the pre-determined default packages for the compiler (recommended setting).  The defaults may not include some sub-packages deemed non-essential or special purpose.  An example is the cluster components of MKL, which are only needed in a distributed memory installation.  If you're not sure of the defaults you can do a trial installation of the compiler in interactive mode and select CUSTOMIZE installation to see and select components.
  • COMPONENTS=ALL directs the installer to install all packages for the compiler.
  • COMPONENTS=<pattern> allows the user to specify which components to install.  The components vary by compiler version and package.  The components should be enclosed in double quotes and semicolon separated.  For a list of components, grep for the <Abbr> tags in <installation directory>/uninstall/mediaconfig.xml, like this:
    • cd <compiler root>/uninstall
    • grep Abbr mediaconfig.xml
    • Note that the list may contain close to, or over, 100 components.
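Once you have the component abbreviations grepped out of mediaconfig.xml, they can be joined into the semicolon-separated COMPONENTS value. A sketch; the three component names below are made up for illustration, and real names come from your own mediaconfig.xml:

```shell
#!/bin/sh
# Sketch: build a COMPONENTS= line from a list of component abbreviations.
# The three names below are hypothetical placeholders; take real ones from
# <install dir>/uninstall/mediaconfig.xml via:  grep Abbr mediaconfig.xml
ABBRS="example-component-a
example-component-b
example-component-c"

# Join the newline-separated list with semicolons:
COMPONENTS=$(printf '%s\n' "$ABBRS" | paste -sd';' -)
echo "COMPONENTS=\"$COMPONENTS\""
```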

PSET_MODE

  • Sets the installer mode.  The installer can install, remove, modify, or repair an installation.
  • PSET_MODE=install directs the installer to perform an installation
  • PSET_MODE=uninstall directs the installer to remove a previous installation.  If multiple versions of the compiler are installed, the installer removes the most recent installation.  This information is kept in the RPM database or the non-RPM database, depending on the mode used for the installation.
  • PSET_MODE=modify allows the user to redo an installation.  The most common scenario is to rework an existing installation with additional COMPONENTS set or unset.
  • PSET_MODE=repair directs the installer to retry an installation, checking for missing or damaged files, directories, symbolic links, permissions, etc.

PHONEHOME_SEND_USAGE_DATA

  • This directive guides the installer in the user's intent for the optional Intel Software Improvement Program.  This setting determines whether or not the compiler periodically sends customer usage information back to Intel.  The intent is for Intel to gather information on what compiler options are being used, amongst other information.  More information on the Intel Software Improvement Program can be found here: https://software.intel.com/en-us/articles/software-improvement-program.
  • PHONEHOME_SEND_USAGE_DATA=no directs the installer to configure the compiler to not send usage data back to the Intel Software Improvement Program.
  • PHONEHOME_SEND_USAGE_DATA=yes directs the installer to configure the compiler to send usage data back to the Intel Software Improvement Program.  Setting this to YES is your consent to opt-into this program.

AMPLIFIER_SAMPLING_DRIVER_INSTALL_TYPE=build

  • Sampling driver installation type, valid values are: {build, kit}

  • VTune sampling driver installation control: either install a pre-built kernel module from the package ('kit') or build the driver on your system ('build').  You can use the 'kit' driver if you have a supported Linux distribution and version; see the VTune installation and user guide for details.  In order to 'build', you will need kernel sources installed on your system.

AMPLIFIER_POWER_DRIVER_INSTALL_TYPE

  • Power driver installation type, valid values are: {build, kit}

  • VTune driver (kernel module) for power sampling.  'kit' means there is a pre-built driver for your OS in the package; you can use it if you have a supported Linux distribution and version.  See the VTune installation and user guide for details.  In order to 'build', you will need kernel sources installed on your system.

AMPLIFIER_DRIVER_ACCESS_GROUP

  • Driver access group, valid values are: {anythingpat, vtune}

  • The GROUP (GID) to use for the installation.  Allows group access to the VTune drivers.  Typically we recommend setting up a group named 'vtune' containing the users who are allowed to access the VTune sampling drivers.

AMPLIFIER_DRIVER_PERMISSIONS

  • Driver access permissions; valid values are a Linux file permission setting: {anythingpat, 666}

AMPLIFIER_LOAD_DRIVER

  • Load driver(s) into the kernel during installation, valid values are: {yes, no}

AMPLIFIER_C_COMPILER=auto

  • Path to C compiler, valid values are: {filepat, auto, none}

  • To use another compiler, give the full path and compiler name, such as 'gcc', 'icc', or another compiler.  The default is gcc.

AMPLIFIER_KERNEL_SRC_DIR

  • Used only if you build the VTune drivers from sources.  This gives the path to the kernel sources.
  • Path to kernel source directory, valid values are: {filepat, auto, none}

AMPLIFIER_MAKE_COMMAND

  • Used only if you build the VTune drivers from sources.  This gives the path to the system 'make' command.
  • Valid values are: {filepat, auto, none}

AMPLIFIER_INSTALL_BOOT_SCRIPT

  • Controls whether a boot script is installed on the system to automatically load the sampling drivers on system boot.
  • Valid values are: {yes, no}

AMPLIFIER_DRIVER_PER_USER_MODE

  • Enable per-user collection mode for Sampling driver, valid values are: {yes, no}

SIGNING_ENABLED

  • Directs the installer whether or not to check RPM digital signatures.  Checking signatures is recommended.  It allows the installer to find data corruption from such things as incomplete downloads of compiler packages or damaged RPMs.
  • SIGNING_ENABLED=yes directs the installer to check RPM digital signatures.
  • SIGNING_ENABLED=no directs the installer to skip the checking of RPM digital signatures.

 

Silent Install Steps: "Copy and Repeat" Method for Silent Configuration File Creation

If you need to perform the same sort of installation over and over again, one way to get the silent installation configuration file right the first time is to run the installation program once interactively, using the options that meet your local needs, and record those options into a configuration file.  That file can then be used to replicate the same install silently in future installations.

To do this, the user simply needs to add the "duplicate" option to the script invocation, and run a normal interactive install, as follows:

  • prompt> ./install.sh --duplicate /tmp/silent.cfg

This "dash dash duplicate" option records the choices you make into the file specified on the command line.  You can modify this recorded configuration file as appropriate and use it to perform future silent installations.

RPM Command Line Installations

The files associated with the Linux Compiler Professional products are stored in RPM (Red Hat Package Manager) files, grouped according to certain file type guidelines.  Each major product component consists of one or more of these RPMs.  For non-RPM systems, and for users who choose to install the product without using the RPM database of their target systems, an "under the hood" utility embedded inside the installation program extracts the contents of the RPM files.

RPM Embedded Installation Functionality

The Linux System Studio 2015 packaging includes RPM files that also contain embedded installation functionality.  This means that key install behaviors such as environment script updating and symbolic link creation, which used to be only in the install program itself, are now embedded in the RPM files.  As a result, the experienced user can make use of the RPM files directly in order to install and remove Intel System Studio 2015 for Linux hosts products.

Warning: this is truly for the experienced, Linux system savvy user.  Most RPM command capabilities require root privileges.  Improper use of rpm commands can corrupt and destroy a working system.

 

The changes done for the Linux compiler products are intended to ease the job of deploying in enterprise deployments, including cluster environments. 

Product Layout for Intel® System Studio 2015

Here is an example for <tmpdir>/l_cembd_b_2015.0.020/, which is the beta ("_b_") 2015 package, update 0, build 020.

Top directory contents of a typical package:

  • cd_eject.sh - CD eject script used by install.sh
  • install.sh - install script
  • install_GUI.sh - GUI front-end to the installer using X11. Only used for interactive, graphical installation method.
  • license.txt - end user license agreement
  • support.txt - package version and contents information
  • pset - installation and licensing content directory used by the Intel installers
  • rpm - directory containing all product content in RPM file format, plus the EULA and LGPL license
  • silent.cfg - a sample silent configuration file.  You can use this as a template.

This is an example ISS 2015 Beta rpm directory.  This directory listing is for the Update 0, build 020 ISS 2015 Beta release; your version strings will vary by compiler version (see https://software.intel.com/en-us/articles/intel-compiler-and-composer-update-version-numbers-to-compiler-version-number-mapping).  NOTE: this is not intended to be a comprehensive list for every compiler.  RPMs vary by compiler edition and components, and may vary by release.  Please list your 'rpm' directory for a list specific to your compiler.  The following is intended as a representative list:

intel-cembd-common-2015b-020.noarch.rpm                    intel-cembd-mkl-32b-020-11.2-2.noarch.rpm
intel-cembd-common-pset-2015b-020.noarch.rpm               intel-cembd-mkl-64b-020-11.2-2.noarch.rpm
intel-cembd-compilerpro-android-32b-020-15.0-0.noarch.rpm  intel-cembd-mkl-common-020-11.2-2.noarch.rpm
intel-cembd-compilerproc-32b-020-15.0-0.noarch.rpm         intel-cembd-mkl-devel-32b-020-11.2-2.noarch.rpm
intel-cembd-compilerproc-64b-020-15.0-0.noarch.rpm         intel-cembd-mkl-devel-64b-020-11.2-2.noarch.rpm
intel-cembd-compilerproc-common-020-15.0-0.noarch.rpm      intel-cembd-mkl-gnu-32b-020-11.2-2.noarch.rpm
intel-cembd-compilerproc-devel-32b-020-15.0-0.noarch.rpm   intel-cembd-mkl-gnu-64b-020-11.2-2.noarch.rpm
intel-cembd-compilerproc-devel-64b-020-15.0-0.noarch.rpm   intel-cembd-mkl-gnu-devel-32b-020-11.2-2.noarch.rpm
intel-cembd-compilerpro-common-020-15.0-0.noarch.rpm       intel-cembd-mkl-gnu-devel-64b-020-11.2-2.noarch.rpm
intel-cembd-compilerpro-devel-32b-020-15.0-0.noarch.rpm    intel-cembd-mkl-sp2dp-64b-020-11.2-2.noarch.rpm
intel-cembd-compilerpro-devel-64b-020-15.0-0.noarch.rpm    intel-cembd-mkl-sp2dp-devel-64b-020-11.2-2.noarch.rpm
intel-cembd-compilerpro-gfx-64b-020-15.0-0.noarch.rpm      intel-cembd-ocd-020-0.8.0-0.i486.rpm
intel-cembd-compilerpro-vars-020-15.0-0.noarch.rpm         intel-cembd-ocd-020-0.8.0-0.x86_64.rpm
intel-cembd-gdb-common-020-7.7-0.noarch.rpm                intel-cembd-ocd-src-020-0.8.0-0.noarch.rpm
intel-cembd-gdb-core-020-7.7-0.i486.rpm                    intel-cembd-openmp-32b-020-15.0-0.noarch.rpm
intel-cembd-gdb-core-020-7.7-0.x86_64.rpm                  intel-cembd-openmp-64b-020-15.0-0.noarch.rpm
intel-cembd-gdb-python-src-020-7.7-0.noarch.rpm            intel-cembd-openmp-devel-32b-020-15.0-0.noarch.rpm
intel-cembd-gdb-server-32b-020-7.7-0.noarch.rpm            intel-cembd-openmp-devel-64b-020-15.0-0.noarch.rpm
intel-cembd-gdb-server-64b-020-7.7-0.noarch.rpm            intel-cembd-sven-sdk-020-1.0-0.noarch.rpm
intel-cembd-gdb-src-020-7.7-0.noarch.rpm                   intel-cembd-sven-viewer-020-1.0-0.noarch.rpm
intel-cembd-gdb-target-galileo-020-7.7-0.noarch.rpm        intel-cembd-target-2015b-020.noarch.rpm
intel-cembd-ipp-common-020-8.2-0.noarch.rpm                intel-cembd-xdb-020-2015-0.i486.rpm
intel-cembd-ipp-common-devel-32b-020-8.2-0.noarch.rpm      intel-cembd-xdb-020-2015-0.x86_64.rpm
intel-cembd-ipp-common-devel-64b-020-8.2-0.noarch.rpm      intel-cembd-xdb-common-020-2015-0.noarch.rpm
intel-cembd-ipp-mt-devel-32b-020-8.2-0.noarch.rpm          intel-cembd-xdb-dal-020-2015-0.x86_64.rpm
intel-cembd-ipp-mt-devel-64b-020-8.2-0.noarch.rpm          intel-cembd-xdb-eclipse-020-2015-0.i486.rpm
intel-cembd-ipp-st-ac-32b-020-8.2-0.noarch.rpm             intel-cembd-xdb-eclipse-020-2015-0.x86_64.rpm
intel-cembd-ipp-st-ac-64b-020-8.2-0.noarch.rpm             intel-cembd-xdb-kernel-020-2015-0.i486.rpm
intel-cembd-ipp-st-devel-32b-020-8.2-0.noarch.rpm          intel-cembd-xdb-kernel-020-2015-0.x86_64.rpm
intel-cembd-ipp-st-devel-64b-020-8.2-0.noarch.rpm          intel-gpa-14.2.225646-1.i386.rpm
intel-cembd-ipp-st-di-32b-020-8.2-0.noarch.rpm             intel-gpa-14.2.225646-1.x86_64.rpm
intel-cembd-ipp-st-di-64b-020-8.2-0.noarch.rpm             intel-inspector-sys-2015-cli-362926-15.0-2.i486.rpm
intel-cembd-ipp-st-gen-32b-020-8.2-0.noarch.rpm            intel-inspector-sys-2015-cli-362926-15.0-2.x86_64.rpm
intel-cembd-ipp-st-gen-64b-020-8.2-0.noarch.rpm            intel-inspector-sys-2015-cli-common-362926-15.0-2.noarch.rpm
intel-cembd-ipp-st-jp-32b-020-8.2-0.noarch.rpm             intel-inspector-sys-2015-cli-pset-362926-15.0-2.noarch.rpm
intel-cembd-ipp-st-jp-64b-020-8.2-0.noarch.rpm             intel-inspector-sys-2015-doc-362926-15.0-2.noarch.rpm
intel-cembd-ipp-st-mx-32b-020-8.2-0.noarch.rpm             intel-inspector-sys-2015-gui-362926-15.0-2.i486.rpm
intel-cembd-ipp-st-mx-64b-020-8.2-0.noarch.rpm             intel-inspector-sys-2015-gui-362926-15.0-2.x86_64.rpm
intel-cembd-ipp-st-rr-32b-020-8.2-0.noarch.rpm             intel-inspector-sys-2015-gui-common-362926-15.0-2.noarch.rpm
intel-cembd-ipp-st-rr-64b-020-8.2-0.noarch.rpm             intel-vtune-amplifier-sys-2015-common-pset-362999-15.0-0.noarch.rpm
intel-cembd-ipp-st-sc-32b-020-8.2-0.noarch.rpm             intel-vtune-amplifier-sys-2015-gui-362999-15.0-0.i486.rpm
intel-cembd-ipp-st-sc-64b-020-8.2-0.noarch.rpm             intel-vtune-amplifier-sys-2015-gui-362999-15.0-0.x86_64.rpm
intel-cembd-ipp-st-vc-32b-020-8.2-0.noarch.rpm             intel-vtune-amplifier-sys-2015-gui-common-362999-15.0-0.noarch.rpm
intel-cembd-ipp-st-vc-64b-020-8.2-0.noarch.rpm

Installing Compilers With the RPM Command Line

To install a Linux compiler solution set via the RPM command line, you should first ensure that a working license file or other licensing method (such as floating or network-served licenses) is already in place.  There is no license checking performed during RPM installation.  However, if you install without a license file you will get a 'cannot check out license' error when you try to use the compiler.

You are assumed to have complied with the End User License Agreement (EULA) if you are performing an RPM command line installation.  The EULA is present in the parent installation directory (the license or license.txt file).  Please read this license agreement; it is assumed you agree to it if you proceed with an rpm installation.

Once a license file or license method is in place, the user can install the products directly with these simple steps:

  • Login as root or 'su' to root
  • ISS 2015:  'cd' to the package/rpm directory ( e.g. /tmp/l_cembd_b_2015.0.020/rpm )
  • Run the RPM install command
    • rpm -i *.rpm

This completes without error in most cases.  If some system-level prerequisites (required system libraries, for example) are not met by the target operating system, the rpm install may return a dependency warning.  There are no embedded detailed dependency checks inside the RPM install capabilities, either for required commands such as g++ or for optional requirements such as a supported operating system or JRE.  The embedded requirements are kept simple to ease installation in the general case, with one exception: a /usr/lib/libstdc++.so.6 library must exist on the target system and must match the installation bitness (to be able to compile both 64-bit and 32-bit code, you need two copies of this library, one 64-bit and one 32-bit, in two separate /lib paths).
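The libstdc++ prerequisite can be checked ahead of time with a short pre-flight script. This is a sketch; the directories listed are common library locations and may differ on your distribution:

```shell
#!/bin/sh
# Sketch: pre-flight check for libstdc++.so.6 in common library paths.
# The directory list is an assumption; adjust for your distribution.
found=0
for d in /usr/lib /usr/lib64 /usr/lib32 /usr/lib/*-linux-gnu; do
    if [ -e "$d/libstdc++.so.6" ]; then
        echo "found: $d/libstdc++.so.6"
        found=1
    fi
done
[ "$found" -eq 1 ] || echo "libstdc++.so.6 not found in common paths"
```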

The second requirement is that the target operating system have at least version 3.0 of the "lsb" component installed.  Availability of this LSB component will, in the vast majority of cases, also ensure that other necessary system-level libraries are available.  See LSB Support below for more information on getting the 'lsb' capability onto a target system.

If you believe you have installed the correct requirements on the target system and the dependency failures still persist, there is a fallback option: the "--nodeps" (dash dash nodeps) rpm switch.  Invoking 'rpm -i' with the --nodeps option will allow the rpm installation to succeed in most cases.

  • prompt>  rpm -i --nodeps *.rpm

Again, this will get you past the perceived dependency issues, which may be unique to a particular distribution of Linux and not really a problem for the resulting installation.  But there is no assurance of complete success other than testing the resulting installation.

Other Special RPM Install Cases

If you are installing RPMs using the rpm command line from a multi-architecture package (such as the "combo" IA-32 / Intel64 package or a DVD package), you may want to install only the RPMs that match the target machine's architecture.  Or, if you are installing onto an Intel64 system, you may want to include both the IA-32 and Intel64 components.  Here are some example rpm command line invocations:

  • prompt>  rpm  -i  *.noarch.rpm  *.i486.rpm
    • installs all components needed for operation on the IA-32 architecture
  • prompt>  rpm  -i  *.noarch.rpm  *.i486.rpm  *.x86_64.rpm
    • installs all components needed for operation on both the IA-32 and Intel® 64 architectures

Certain Linux distributions do not like the idea of two RPM files having the same base name.  For example, the rpm versions of certain distros might complain that there is more than one instance of  intel-cproc023-11.1-1 on the command line when installing both the IA-32 and Intel64 RPMs onto the same machine.  For these distros, use the "--force" ( dash dash force ) command line switch:

  • prompt>  rpm  -i  --force  *.noarch.rpm  *.i486.rpm  *.x86_64.rpm

Customizing the RPM Command Line

The rpm command has a long list of available options, including hooks to install from FTP and HTTP RPM repositories, features to examine contents of installed RPM-based programs and uninstalled RPM package files, etc.  Most of these are beyond the scope of this document.  See the Links section for references to external documentation on RPM.  Here are a couple of additional RPM switches, however, which may be routinely useful.

  • prompt>  rpm  -i  --prefix  /my_NFS_dir/intel_compiler_directory/this_version  *.rpm
    • instructs rpm to use the directory /my_NFS_dir/intel_compiler_directory/this_version as the root installation directory
  • prompt>  rpm  -i  --replacefiles  *.rpm
    • directs rpm to replace any existing files with those from the new RPM files
  • prompt>  rpm  -i  --replacepkgs  *.rpm
    • directs rpm to replace any existing package on the system with the new RPM files, even if it is already installed ... this may be useful in test situations where newer builds of a package with the same name are being tested

Uninstallation Using RPM

Since the installation of Intel Linux compiler packages includes all of the uninstall scripts in its delivery, the easiest way to perform a product uninstall is simply to run the uninstall script that is created by the install process.  If you have a need to automate rpm-based uninstalls, however, a couple of "tricks" can be employed to make this simpler.  These should be used with caution, as with any system command performed from a privileged account.

Here is an example command line that will remove all RPM packages from a Linux hosted ISS 2015 package number "020":

  • rpm  -e  --allmatches `rpm -qa | grep intel- | grep 020`
    • note the use of back-quotes (command substitution)
    • note that this only removes compiler packages; you may wish to use a similar method to remove intel-mkl, intel-ipp, intel-gdb, intel-openmp and other Intel packages

Some Linux distributions will also complain about "multiple matches" during the uninstall process.  In this case, the "--allmatches" switch mentioned above can also be employed here.
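Before running the destructive one-liner above, it can be worth previewing exactly which packages the two greps will match.  The helper below is a hypothetical sketch (filter_intel_pkgs is not an Intel tool, just a stdin filter wrapping the same pattern):

```shell
#!/bin/sh
# filter_intel_pkgs BUILD : read package names on stdin and print only
# those that look like Intel packages carrying the given build number.
filter_intel_pkgs() {
    grep 'intel-' | grep "$1"
}

# Preview what would be removed, then feed the same list to rpm -e:
#   rpm -qa | filter_intel_pkgs 020
#   rpm -e --allmatches $(rpm -qa | filter_intel_pkgs 020)
```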

A Short Word on Updates

The rpm structure and command set support the application of updates or "patches" to existing installations.  For example, a util-1.1-2.rpm package may be issued that adds fixed content to some pre-existing util-1.1-1.rpm.  The release process for Linux-hosted ISS 2015 includes support for "version co-existence", that is, multiple installs of separate product versions, so each new iteration of the product is distinct from the previous version.  This means that Intel compiler packages are not available in "patch" form; all product releases are stand-alone versions.  Consequently, use of the 'rpm -U' upgrade capability is not supported by our product delivery model at this time.

 

LSB Support

LSB, or Linux Standard Base, is an effort sponsored by the Linux Foundation (http://www.linuxfoundation.org) to improve the interoperability of Linux operating systems and application software.  Intel is a major participant in Linux Foundation activities and has embraced LSB as a viable means of improving our products and our customers' use of those products.  To that end, we have included establishing LSB compliance as a part of our goals for our products and software packages in the future.

For the purposes of Intel System Studio 2015 for Linux Hosts, our primary objective is to produce packages that adhere to LSB packaging requirements.  Most of the RPM changes mentioned above were made for this purpose.  To be precise, however, we should draw a distinction between product compliance and package compliance.  Because our compiler products must support a vast array of legacy constructs, the applications themselves may or may not be "certifiable" within the LSB guidelines, but our packages, i.e. our RPMs and install programs, should be.  This is the primary reason the "lsb >= 3.0" embedded requirement was added to our RPMs.

Some of these Linux distributions come with LSB support already included in the operating system by default (e.g. SLES11).  For others, an external or optional package must be installed.  If you support an environment that uses RPM command line installation and want to enable that site or system to install without resorting to the dreaded "--nodeps" option, the best bet is to acquire and install the companion LSB solution for that operating system.

The Linux Foundation website contains links to download resources for LSB, as do many of the vendor-specific support sites.  Check these sites for information on adding LSB support to an existing operating system.

For RPM-based systems, a user can check on the status of LSB for their system, using a command like this:

  • prompt>  rpm -q --provides lsb

This will tell you whether an 'lsb' RPM package is already installed and, if so, which version.

For our non-RPM supported operating systems, Ubuntu and Debian, the privileged user can use the Debian 'apt-get' facility to easily install the latest version of LSB supported by the specific distribution:

  • prompt> apt-get install lsb
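On RPM-based systems, the version check can be scripted by parsing the output of 'rpm -q --provides lsb' (lines such as "lsb = 4.0") and comparing against the 3.0 minimum.  The helper below is a sketch that assumes the provides line uses the "lsb = <version>" form:

```shell
#!/bin/sh
# lsb_version_ok MIN : read 'rpm -q --provides lsb' output on stdin and
# succeed (exit 0) only if an 'lsb = <version>' line meets the minimum.
lsb_version_ok() {
    awk -v req="$1" '$1 == "lsb" && $2 == "=" && ($3 + 0) >= (req + 0) { ok = 1 }
                     END { exit ok ? 0 : 1 }'
}

# Typical use:
#   rpm -q --provides lsb | lsb_version_ok 3.0 || echo "need lsb >= 3.0"
```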

Uninstall Instructions

As mentioned above, a standard uninstall script is included with each product installation, regardless of whether the install was performed using a menu install, an RPM command line install, or a "silent" install.  In all cases, using the provided uninstall script should work and is the usual preferred method of removing an installed product.  There is one uninstall feature, however, that is undocumented and can be used to make life a little easier.  Here's an example invocation of that feature:

  • prompt>  /opt/intel/composer_xe_2015.<update>.<build>/bin/uninstall.sh  --default

This "--default" ( dash dash default ) option tells the uninstall script to use the "remove all" option and remove any compiler components associated with the specific package (in this case all components, including C/C++, Fortran, IDB, MKL, TBB, and IPP, if installed).  There is no uninstall program interaction when this switch is used.
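When several co-existing versions are installed, the same switch can drive a loop over each version's own uninstall script.  The sketch below is hypothetical (the function and its prefix parameter are illustrative); in real use the prefix would be /opt/intel and the loop would be run as root.

```shell
#!/bin/sh
# uninstall_all_composer PREFIX : invoke the uninstall.sh of every
# installed Composer XE 2015 package found under PREFIX with --default,
# removing each one non-interactively.
uninstall_all_composer() {
    for script in "$1"/composer_xe_2015.*/bin/uninstall.sh; do
        [ -x "$script" ] && "$script" --default
    done
    return 0
}

# Typical use (as root):
#   uninstall_all_composer /opt/intel
```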


 

Note

As noted in the Intel® Software Development Product End User License Agreement, the Intel® Software Development Product you install will send Intel the product’s serial number and other system information to help Intel improve the product and validate license compliance. No personal information will be transmitted.

Links of Interest

The following links are provided for reference information.

Excellent on-line resource for understanding RPMs and their usage.


    Asynchronous Offload - C++ Code Examples



    This document provides information about asynchronous data transfer, asynchronous computation and memory management without data transfer. This document includes code examples of common usage scenarios. The examples in this article are in C/C++ only.

    Introduction

    Two different C++ pragmas are used: one for data transfer and one for waiting for completion.
    The pragma for data transfer only, with an asynchronous option, is:

    #pragma offload_transfer <clauses> [ signal(<tag>) ]

    The pragma to wait for completion of asynchronous activity is

    #pragma offload_wait <clauses> wait(<tag>)

    The offload pragma also takes optional signal/wait clauses

    #pragma offload <clauses> [ signal(<tag>) ] [ wait(<tag>) ] <statement>

    The offload_transfer and offload_wait pragmas are stand-alone and do not apply to the subsequent code block.

    Data Transfer

    The offload_transfer pragma is a stand-alone pragma, meaning that no statement follows it. This pragma contains a target clause and either all in clauses or all out clauses. Without a signal clause, offload_transfer initiates and completes a synchronous data transfer. With a signal clause, it initiates the data transfer only. The offload_transfer pragma can also take a wait clause. A later pragma with a wait clause is used to wait for completion of the data transfer. Expressions in signal and wait clauses are address-sized values that serve as tags on the asynchronous operation.

    // Example 1:
    // Synchronous data transfer CPU -> MIC
    // Next statement executed after data transfer is completed
    #pragma offload_transfer target(mic:0) in(a,b,c)
    
    // Example 2:
    // Initiate asynchronous data transfer CPU -> MIC
    #pragma offload_transfer target(mic:0) in(a,b,c) signal(&a)

    The offload_wait pragma is also a stand-alone pragma which does not require a succeeding statement. This pragma contains a target clause and a wait clause, which cause the pragma to start execution only after the asynchronous activity associated with the tag has completed.

    // Example 3:
    // Wait for activity signaled by &p to be completed. Variable p is the tag.
    #pragma offload_wait target(mic:0) wait(&p)

    Memory Management

    The offload_transfer pragma can be used for memory allocation and deallocation by avoiding the data transfer with the use of the nocopy clause. This is typically done outside of a loop to amortize the cost of allocation.

    // Example 4:
    #define ALLOC alloc_if(1) free_if(0)
    #define FREE alloc_if(0) free_if(1)
    #define REUSE alloc_if(0) free_if(0)
    // Allocate memory on the coprocessor  (without also transferring data)
    #pragma offload_transfer target(mic:0) nocopy(p,q : length(l) ALLOC)
    …
    for (…)
    {
      // Use of allocated memory on the coprocessor for offloads
      #pragma offload target(mic:0) in(p:length(l) REUSE) out(q:length(l) REUSE)
      {
        // computation using p and q
        ...
      }
    }
    …
    // Free memory on the coprocessor (without also transferring data)
    #pragma offload_transfer target(mic:0) nocopy(p,q : length(l) FREE)

    Send Input Data Asynchronously

    The most typical usage begins by initiating the data transfer, executing some CPU activity, and then starting the offload computation that will use the transferred data. The data is placed in the same variables listed in the transfer initiation. Those variables must be accessible by the time the offload pragma begins execution.

    // Example 5:
    // Initiate asynchronous data transfer CPU -> MIC
    #pragma offload_transfer target(mic:0) in(p,q,r) signal(&p)
    …
    …
    // Do the offload only after data has arrived
    #pragma offload target(mic:0) wait(&p)
    {
      // offload computation
      … = p;
    }

    Receive Output Asynchronously

    In asynchronous offload, an offload computation produces results that will be transferred back to the host at a later time. The offload pragma finishes the work but does not immediately copy the data back. Instead, an asynchronous offload_transfer initiates the copy. Later, when results are needed, an offload_wait is used to retrieve the data.

    // Example 6a:
    // Perform  the offload computation but don’t copy back results immediately
    #pragma offload target(mic:0) nocopy(p)
    {
      p = …;
    }
    // Initiate asynchronous data transfer MIC -> CPU
    #pragma offload_transfer target(mic:0) out(p) signal(&p)
    …
    …
    // Wait for data to arrive
    #pragma offload_wait target(mic:0) wait(&p)

    Asynchronous Computation

    The host initiates an offload to be performed asynchronously and can proceed to the next statement after starting this computation. Later in the code, an offload_wait pragma is used to wait for completion of the offload activity.

    // Example 6b:
    char signal_var;
    int *p;
    do {
      // Initiate asynchronous computation
      #pragma offload … in( p:length(1000) ) signal(&signal_var)
      {
        mic_compute();
      }
      concurrent_cpu_activity();
      // Wait for the offload signaled by &signal_var to complete
      #pragma offload_wait wait(&signal_var)
    } while (1);

    Testing Signals

    Some scenarios require testing to determine whether the computation signaled with a given tag is finished. Use the _Offload_signaled function (non-blocking mechanism) to check if an offload has completed.

    // Example 7:
    // Initiate asynchronous computation
    int c;
    #pragma offload target(mic:mic_no) signal(&c) ...
    {
       S3;
    }
    ...
    // Test if computation has been completed for tag “c”
    if (_Offload_signaled(mic_no, &c)) { … }

    Double-buffering

    Use the offload, offload_transfer and offload_wait pragmas to implement a double-buffering algorithm. The example below shows memory allocation on the target device, asynchronous data transfers, and the use of signal clauses to control asynchronous offloads.

    // Example 8: Double-buffering Input
    void do_async_in()
    {
      int i;
      #pragma offload_transfer target(mic:0) in(in1 : length(count) REUSE) signal(in1)
      for (i=0; i<iter; i++)
      {
        if (i%2 == 0)
        {
          #pragma offload_transfer target(mic:0) if(i!=iter-1) \
            in(in2 : length(count) REUSE) signal(in2)
    
          #pragma offload target(mic:0) nocopy(in1) wait(in1) \
            out(out1 : length(count) REUSE)
             compute(in1, out1);
        } else {
          #pragma offload_transfer target(mic:0) if(i!=iter-1) \
            in(in1 : length(count) REUSE ) signal(in1)
    
          #pragma offload target(mic:0) nocopy(in2) wait(in2) \
            out(out2 : length(count) REUSE)
              compute(in2, out2);
        }
      }
    }
    
    // Example 8: Double-buffering Output
    void do_async_out()
    {
      int i;
      for (i=0; i<iter+1; i++)
      {
        if (i%2 == 0) {
          if (i<iter) {
            #pragma offload target(mic:0) in(in1 : length(count) REUSE) nocopy(out1)
              compute(in1, out1);
            #pragma offload_transfer target(mic:0) out(out1:length(count) REUSE) signal(out1)
          }
          if (i>0) {
            #pragma offload_wait target(mic:0) wait(out2)
              use_result(out2);
          }
        } else {
          if (i<iter) {
            #pragma offload target(mic:0) in(in2 : length(count) REUSE) nocopy(out2)
              compute(in2, out2);
            #pragma offload_transfer target(mic:0) out(out2:length(count) REUSE) signal(out2)
          }
          if (i>0) {
            #pragma offload_wait target(mic:0) wait(out1)
              use_result(out1);
          }
        }
      }
    }

    Summary

    Asynchronous offload allows data transfer and computation to overlap. This method does not require the use of additional threads on the host and is useful for pipelined operations. Refer to the following sample code installed with the Intel® C++ Compiler for more details (default installation directory):

    • Linux*:  /opt/intel/composer_xe_2015/Samples/en_US/C++/mic_samples/intro_sampleC 
    • Windows*:  C:\Program Files (x86)\Intel\Composer XE 2015\Samples\en_US\C++

    Asynchronous Offload - Fortran Code Examples


    This document provides information about asynchronous data transfer, asynchronous computation and memory management without data transfer. This document includes code examples of common usage scenarios. The examples in this article are in Fortran only.

    Introduction

    Two different Fortran directives are used: one for data transfer and one for waiting for completion.
    The directive for data transfer only, with an asynchronous option, is:

    !dir$ offload_transfer <clauses> [ signal(<tag>) ]

    The directive to wait for completion of asynchronous activity is

    !dir$ offload_wait <clauses> wait(<tag>)

    The offload directive also takes optional signal and wait clauses

    !dir$ offload <clauses> [ signal(<tag>) ] [ wait(<tag>) ] <statement>

    The offload_transfer and offload_wait directives are stand-alone and do not apply to the subsequent code block.

    Data Transfer

    The offload_transfer directive is a stand-alone directive, meaning that no statement follows it. This directive contains a target clause and either all in clauses or all out clauses. Without a signal clause, offload_transfer initiates and completes a synchronous data transfer. With a signal clause, it initiates the data transfer only. The offload_transfer directive can also take a wait clause. A later directive with a wait clause is used to wait for completion of the data transfer.
    Expressions in the signal and wait clauses are address-sized values that serve as tags on the asynchronous operation.

    ! Example 1
    ! Synchronous data transfer CPU -> MIC
    ! Next statement executed after data transfer is completed
    !dir$ offload_transfer target(mic:0) in(a,b,c)
    
    ! Example 2
    ! Initiate asynchronous data transfer CPU -> MIC
    !dir$ offload_transfer target(mic:0) in(a,b,c) signal(s)

    The offload_wait directive is also a stand-alone directive which does not require a succeeding statement. This directive contains a target clause and a wait clause, which cause the directive to start execution only after the asynchronous activity associated with the tag has completed.

    ! Example 3
    ! Wait for activity signaled by s to be completed. Variable s is the tag.
    !dir$ offload_wait target(mic:0) wait(s)

    Memory Management

    The offload_transfer directive can be used for memory allocation and deallocation by avoiding the data transfer with the use of the nocopy clause. This is typically done outside of a loop to amortize the cost of allocation.

    ! Example 4
    #define ALLOC alloc_if(.TRUE.) free_if(.FALSE.)
    #define FREE alloc_if(.FALSE.) free_if(.TRUE.)
    #define REUSE alloc_if(.FALSE.) free_if(.FALSE.)
    ! Allocate memory on the coprocessor  (without also transferring data)
    !dir$ offload_transfer target(mic:0) nocopy(p,q: ALLOC)
    do …
        ! Use of allocated memory on the coprocessor for offloads
        !dir$ offload target(mic:0) in(p: REUSE) out(q: REUSE)
        ! computation using p and q
    enddo
    …
    ! Free memory on the coprocessor (without also transferring data)
    !dir$ offload_transfer target(mic:0) nocopy(p,q: FREE)

    Send Input Data Asynchronously

    The most typical usage initiates the data transfer, executes some CPU activity, then starts the offload computation that will use the transferred data. The data is placed in the same variables listed in the transfer initiation. Those variables must be accessible by the time the offload directive begins execution.

    ! Example 5
    ! Initiate asynchronous data transfer CPU -> MIC
    !dir$ offload_transfer target(mic:0) in(p,q,r) signal(s)
    …
    …
    ! Do the offload only after data has arrived
    !dir$ offload target(mic:0) wait(s)
    ! offload computation
    … = p

    Receive Output Asynchronously

    In asynchronous offload, an offload computation produces results that will be transferred back to the host at a later time. The offload directive finishes the work but does not immediately copy the data back. Instead, an asynchronous offload_transfer initiates the copy. Later, when results are needed, an offload_wait is used to retrieve the data.

    ! Example 6a
    ! Perform  the offload computation but don’t copy back results immediately
    !dir$ offload target(mic:0) nocopy(p)
    ! offload computation
    …
    ! Initiate asynchronous data transfer MIC -> CPU
    !dir$ offload_transfer target(mic:0) out(p) signal(s)
    …
    …
    ! Wait for data to arrive
    !dir$ offload_wait target(mic:0) wait(s)

    Asynchronous Computation

    The host initiates an offload to be performed asynchronously and can proceed to the next statement after starting this computation. Later in the code, an offload_wait directive is used to wait for completion of the offload activity.

    ! Example 6b
    character :: signal_var
    integer, allocatable, dimension(:) :: p
    do …
        ! Initiate asynchronous computation
        !dir$ offload … in(p) signal(signal_var)
        call mic_compute()
        call concurrent_cpu_activity()
        ! Wait for the offload signaled by signal_var to complete
        !dir$ offload_wait wait(signal_var)
    enddo

    Testing Signals

    Some scenarios require testing to determine whether the computation signaled with a given tag is finished. Use the Offload_signaled function (non-blocking mechanism) to check if an offload has completed.

    ! Example 7
    ! Initiate asynchronous computation
    program prog
    use mic_lib
    implicit none
    
    integer :: c
    !dir$ offload target(mic:mic_no) signal(c)
        ! offload computation statement
        ...
    ! Test if computation has been completed
    if (Offload_signaled(mic_no, c) /= 0) then
        …
    endif

    Double-buffering

    Use the offload, offload_transfer and offload_wait directives to implement a double-buffering algorithm. The example below shows memory allocation on the target device, asynchronous data transfers, the use of signal clauses to control asynchronous offloads.

    ! Example 8 - Double-buffering Input
    subroutine do_async_in()
        integer :: i
        !dir$ offload_transfer target(mic:0) in(in1: REUSE) signal(sig1)
        do i=1, iter
            if (MOD(i, 2) == 1) then
                ! Odd numbered iterations
                !dir$ offload_transfer target(mic:0) if(i /= iter) in(in2: REUSE) signal(sig2)
                !dir$ offload target(mic:0) nocopy(in1) wait(sig1)  out(out1: REUSE)
                call compute(in1, out1);
            else
                !dir$ offload_transfer target(mic:0) if(i /= iter) in(in1: REUSE) signal(sig1)
                !dir$ offload target(mic:0) nocopy(in2) wait(sig2) out(out2: REUSE)
                call compute(in2, out2);
            endif
        enddo
    end subroutine
    
    ! Example 8 - Double-buffering Output
    subroutine do_async_out()
        integer :: i
        do i=1, iter
            if(MOD(i, 2) ==1) then ! Odd numbered iterations
                if (i<iter) then ! all iterations except the last
                    !dir$ offload target(mic:0) in(in1: REUSE) nocopy(out1)
                        call compute(in1, out1)
                    !dir$ offload_transfer target(mic:0) out(out1: REUSE) signal(sig1)
                 end if
                 if (i>1) then ! all iterations except the first
                     !dir$ offload_wait target(mic:0) wait(sig2)
                     call use_result(out2)
                 endif
             else ! even numbered iterations
                 if(i < iter) then ! all iterations except the last
                     !dir$ offload target(mic:0) in(in2:REUSE) nocopy(out2)
                     call compute(in2, out2)
                     !dir$ offload_transfer target(mic:0) out(out2:REUSE) signal(sig2)
                 endif
                if(i > 1) then ! all iterations except the first
                    !dir$ offload_wait target(mic:0) wait(sig1)
                    call use_result(out1)
                endif
            endif
        enddo
    end subroutine

    Summary

    Asynchronous offload allows data transfer and computation to overlap. This method does not require the use of additional threads on the host and is useful for pipelined operations. Refer to the following sample code installed with the Intel® Fortran Compiler for more details (default installation directory):

    • Linux*:  /opt/intel/composer_xe_2015/Samples/en_US/Fortran/mic_samples/LEO_Fortran_intro
    • Windows*:  C:\Program Files (x86)\Intel\Composer XE 2015\Samples\en_US\Fortran

    Controlling Floating-Point Computation Modes with Intel® Threading Building Blocks


    Intel® Threading Building Blocks (Intel® TBB) 4.2 Update 4 introduced extended support for controlling floating-point computation settings. These settings can now be specified when invoking most parallel algorithms (including flow::graph). In this blog I would like to describe some of the subtleties, the new functionality, and the general floating-point support in Intel TBB. This blog does not cover general CPU floating-point support. If you are not familiar with CPU floating-point support, I recommend starting with the Floating-point Operations section of the Intel® C++ Compiler reference guide. For more details on the complexities of floating-point arithmetic, I recommend the classic "What Every Computer Scientist Should Know About Floating-Point Arithmetic".

    Intel TBB offers two ways to specify the desired floating-point settings for tasks executed by the Intel TBB task scheduler:

    1. When the task scheduler is initialized for a given application thread, it captures the current floating-point settings of that thread;
    2. The task_group_context class has a method for capturing the current floating-point settings.

    Let us consider the first approach. It is essentially implicit: the task scheduler always and unconditionally captures the floating-point settings at the moment of its initialization. The captured settings are then used for all tasks associated with that task scheduler. In other words, this approach can be viewed as a property of the task scheduler. Since it is a property of the task scheduler, we can apply and control floating-point settings in our application in two ways:

    1. A task scheduler is created per thread, so we can start a new thread, specify the desired settings, and then initialize a new task scheduler for that thread (explicitly or implicitly), which will capture the floating-point settings;
    2. If a thread destroys its task scheduler and initializes a new one, new settings can be captured. We can specify new floating-point settings before re-creating the scheduler; when the new task scheduler is created, those settings will be applied to all of its tasks.

    I will try to illustrate some of the subtleties in the following examples:

    Notation:

    • “fp0”, “fp1” and “fpx” – states describing floating-point settings;
    • “set_fp_settings( fp0 )” and “set_fp_settings( fp1 )” – set the floating-point settings for the current thread;
    • “get_fp_settings( fpx )” – read the floating-point settings of the current thread and store them in “fpx”.

    Example #1. The default task scheduler.

    // Suppose fp0 is used here.
    // Every Intel TBB algorithm creates a default task scheduler which also captures floating-point
    // settings when initialized.
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp0 will be used for all iterations on any Intel TBB worker thread.
    } );
    // There is no way anymore to destroy the task scheduler on this thread.

    Example #2. A configurable task scheduler.

    // Suppose fp0 is used here.
    tbb::task_scheduler_init tbb_scope;
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp0 will be used for all iterations on any Intel TBB worker thread.
    } );

    Overall, Example 2 behaves the same way as Example 1, but it provides a way to shut down the task scheduler manually.

    Example #3. Re-initializing the task scheduler.

    // Suppose fp0 is used here.
    {
        tbb::task_scheduler_init tbb_scope;
        tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
            // fp0 will be used for all iterations on any Intel TBB worker thread.
        } );
    } // the destructor calls task_scheduler_init::terminate() to destroy the task scheduler
    set_fp_settings( fp1 );
    {
        // A new task scheduler will capture fp1.
        tbb::task_scheduler_init tbb_scope;
        tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
            // fp1 will be used for all iterations on any Intel TBB worker
            // thread.
        } );
    }

    Example #4. Another thread.

    void thread_func();
    int main() {
        // Suppose fp0 is used here.
        std::thread thr( thread_func );
        // A default task scheduler will capture fp0
        tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
            // fp0 will be used for all iterations on any Intel TBB worker thread.
        } );
        thr.join();
    }
    void thread_func() {
        set_fp_settings( fp1 );
        // Since it is another thread, Intel TBB will create another default task scheduler which will
        // capture fp1 here. The new task scheduler will not affect floating-point settings captured by
        // the task scheduler created on the main thread.
        tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
            // fp1 will be used for all iterations on any Intel TBB worker thread.
        } );
    }

    Note that Intel TBB may reuse the same worker threads for both parallel_for invocations even though they are called from different threads. It is nevertheless guaranteed that all iterations of the parallel_for on the main thread will use fp0, and all iterations of the second parallel_for will use fp1.

    Example #5. Changing floating-point settings on a user thread.

    // Suppose fp0 is used here.
    // A default task scheduler will capture fp0.
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp0 will be used for all iterations on any Intel TBB worker thread.
    } );
    set_fp_settings( fp1 );
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp0 will be used even though the floating-point settings were changed before the
        // Intel TBB parallel algorithm was invoked: the task scheduler has already captured fp0,
        // and those settings are applied to all Intel TBB tasks.
    } );
    // fp1 is guaranteed here.

    The second parallel_for leaves fp1 unchanged on the user thread (even though fp0 is used for all of its iterations), since Intel TBB guarantees that invoking any Intel TBB parallel algorithm does not change the settings of the calling thread, even if the algorithm runs with different settings.

    Example #6. Changing floating-point settings inside an Intel TBB task.

    // Suppose fp0 is used here.
    // A default task scheduler will capture fp0
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        set_fp_settings( fp1 );
        // Setting fp1 inside the task leads to undefined behavior. There are no guarantees about
        // floating-point settings for any subsequent tasks of this parallel_for or other algorithms.
    } );
    // No guarantees about floating-point settings here or for the following algorithms.

    If you really need different floating-point settings inside a task, save the previous settings and restore them before the end of the task:

    // Suppose fp0 is used here.
    // A default task scheduler will capture fp0
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        get_fp_settings( fpx );
        set_fp_settings( fp1 );
        // ... some calculations.
        // Restore captured floating-point settings before the end of the task.
        set_fp_settings( fpx );
    } );
    // fp0 is guaranteed here.

    The task-scheduler approach can solve most problems of managing floating-point settings. But imagine a situation where two parts of a computation require different settings. Of course, you could use the approaches shown in examples 3 and 4, but they bring certain difficulties:

    1. Implementation effort: in example 3 you cannot control the lifetime of the task scheduler object, and in example 4 synchronization between threads may be required;
    2. Performance impact: example 3 re-initializes the task scheduler where this was previously unnecessary, and example 4 can lead to oversubscription.

    What about nested computations that need different floating-point settings? The task-scheduler approach becomes awkward there, since it would require a lot of boilerplate code.

    Intel TBB 4.2 Update 4 introduces a new approach based on task_group_context: its functionality has been extended to manage the floating-point settings of the tasks associated with it, via the new method

    void task_group_context::capture_fp_settings();

    which captures those settings from the calling thread and propagates them to its tasks. So you can easily specify the desired settings for a particular parallel algorithm:

    Example #7. Specifying floating-point settings for a particular algorithm.

    // Suppose fp0 is used here.
    // The task scheduler will capture fp0.
    task_scheduler_init tbb_scope;
    tbb::task_group_context ctx;
    set_fp_settings( fp1 );
    ctx.capture_fp_settings();
    set_fp_settings( fp0 );
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // Although the task scheduler captured fp0 when initialized and the parallel algorithm
        // is called from a thread with fp0, fp1 will be used for all iterations on any Intel TBB
        // worker thread, since a task group context (with captured fp1) is specified for this
        // parallel algorithm.
    }, ctx );

    Example 7 is not especially interesting, since the same result could be achieved by specifying fp1 before initializing the task scheduler. Consider instead the hypothetical problem where two parts of a computation require different settings. It can be solved as follows:

    Example #8. Specifying floating-point settings for different parts of a computation.

    // Suppose fp0 is used here.
    // The task scheduler will capture fp0.
    task_scheduler_init tbb_scope;
    tbb::task_group_context ctx;
    set_fp_settings( fp1 );
    ctx.capture_fp_settings();
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // In spite of the fact that floating-point settings are fp1 on the main thread, fp0 will be
        // here for all iterations on any Intel TBB worker thread since the task scheduler captured fp0
        // when initialized.
    } );
    // fp1 will be used here since TBB algorithms do not change floating-point settings which were set
    // before calling.
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp1 will be here since the task group context with captured fp1 is specified for this
        // parallel algorithm.
    }, ctx );
    // fp1 will be used here.

    Examples 7 and 8 have already demonstrated one property of the task-group-context approach: when a context is specified for an Intel TBB parallel algorithm, the settings specified this way take priority over those specified via the task scheduler. The approach has one more property: nested parallel algorithms inherit the floating-point settings from the task group context specified for the outer parallel algorithm.

    Example #9. Nested parallel algorithms.

    // Suppose fp0 is used.
    // The task scheduler will capture fp0.
    task_scheduler_init tbb_scope;
    tbb::task_group_context ctx;
    set_fp_settings( fp1 );
    ctx.capture_fp_settings();
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp1 will be here
        tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
            // Although the task group context is not specified for the nested parallel algorithm and
            // the task scheduler has captured fp0, fp1 will be here.
        } );
    }, ctx );
    // fp1 will be used here.

    If you need the settings captured by the task scheduler inside a nested algorithm, you can use an isolated task group context:

    Example #10. A nested parallel algorithm with an isolated task group context.

    // Suppose fp0 is used.
    // The task scheduler will capture fp0.
    task_scheduler_init tbb_scope;
    tbb::task_group_context ctx;
    set_fp_settings( fp1 );
    ctx.capture_fp_settings();
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp1 will be used here.
        tbb::task_group_context ctx2( tbb::task_group_context::isolated );
        tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
            // ctx2 is an isolated task group context so it will have fp0 inherited from the task
            // scheduler. That’s why fp0 will be used here.
        }, ctx2 );
    }, ctx );
    // fp1 will be used here.
    

    A single blog post cannot demonstrate all the facets of floating-point support in Intel TBB, but these examples show the main ideas of managing floating-point settings in Intel TBB for use in real applications.

    The main principles of managing floating-point settings can be summarized as follows:

    • The settings can be specified for all Intel TBB parallel algorithms via the task scheduler, or for particular Intel TBB algorithms via a task group context;
    • Settings captured by a task group context take priority over settings captured during task scheduler initialization;
    • By default, all nested algorithms inherit these settings from the outer level, unless a task group context with captured settings or an isolated task group context is specified;
    • Invoking an Intel TBB parallel algorithm does not change the floating-point settings of the calling thread, even if the algorithm runs with different settings;
    • Settings specified after task scheduler initialization are not visible to Intel TBB parallel algorithms unless the task-group-context approach is used or the task scheduler is re-initialized;
    • User code inside a task should either not change the floating-point settings, or restore the previous settings before the end of the task.
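    The examples above rely on unspecified set_fp_settings/get_fp_settings helpers. Here is a minimal sketch of what they might look like on x86, assuming the floating-point settings of interest live in the SSE MXCSR register (rounding mode, FTZ/DAZ flags); the helper names mirror the examples, everything else is illustrative:

    ```cpp
    #include <xmmintrin.h>

    // Hypothetical implementations of the helpers used throughout the examples.
    // On x86 with SSE, the MXCSR control/status register holds the rounding mode
    // and the FTZ/DAZ flags, so one set of "floating-point settings" can be
    // represented by a single 32-bit word.
    typedef unsigned int fp_settings_t;

    void get_fp_settings( fp_settings_t &s ) { s = _mm_getcsr(); }
    void set_fp_settings( fp_settings_t s )  { _mm_setcsr( s ); }

    // Usage pattern from the examples: capture, modify, restore.
    void rounding_demo() {
        fp_settings_t fp0;
        get_fp_settings( fp0 );           // capture the current settings as fp0
        set_fp_settings( fp0 | 0x6000 );  // bits 13-14 of MXCSR: round toward zero
        // ... calculations under the modified settings ...
        set_fp_settings( fp0 );           // restore fp0 before leaving
    }
    ```

    Any state that can be saved and restored this way (for example the x87 control word as well) fits the same pattern.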

    P.S. A deferred task scheduler captures the floating-point settings when its initialize() method is called.

    Example #11: Explicit task scheduler initialization.

    set_fp_settings( fp0 );
    tbb::task_scheduler_init tbb_scope( task_scheduler_init::deferred );
    set_fp_settings( fp1 );
    tbb_scope.initialize();
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // The task scheduler is declared when fp0 is set but it will capture fp1 since it is
        // initialized when fp1 is set.
    } );
    // fp1 will be used here.

    P.P.S. Be careful if you rely on automatic capturing by the task scheduler: it will not work if your function is called from inside another Intel TBB parallel algorithm.

    Example #12. One more warning: beware of library functions.

    Code snippet 1. A slightly modified example 1. This code is correct; there is no mistake here.

    set_fp_settings( fp0 );
    // Run with the hope that Intel TBB parallel algorithm will create a default task scheduler which
    // will also capture floating-point settings when initialized.
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {...} );

    Code snippet 2. It is enough to call code snippet 1 as a library function.

    set_fp_settings( fp1 );
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        call “code snippet 1”;
    } );
    // Possibly, you will want to have fp1 here but see the second bullet below.
    

    This example looks quite harmless, since code snippet 1 will presumably set the required settings and perform its calculations with fp0. But there are two problems here:

    1. By the time code snippet 1 is called, the task scheduler is already initialized and has already captured fp1. So code snippet 1 performs its calculations with fp1, ignoring the fp0 settings;
    2. The isolation of the user's floating-point settings is broken, since code snippet 1 changes those settings inside an Intel TBB task but does not restore the original ones. So there are no guarantees about the floating-point settings after the Intel TBB parallel algorithm in code snippet 2 completes.

    Code snippet 3. The fixed solution.

    Let us fix code snippet 1:

    // Capture the incoming fp settings.
    get_fp_settings( fpx );
    set_fp_settings( fp0 );
    tbb::task_group_context ctx;
    ctx.capture_fp_settings();
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> &r ) {
        // Here fp0 will be used for all iterations on any Intel TBB worker thread.
    }, ctx );
    // Restore fp settings captured before setting fp0.
    set_fp_settings( fpx );

    Code snippet 2 remains unchanged:

    set_fp_settings( fp1 );
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> &r ) {
        call “fixed code snippet 1”;
    } );
    // fp1 will be used here since the “fixed code snippet 1” does not change the floating-point
    // settings visible to “code snippet 2”.

     


    HTML5 Hybrid Apps with Admob* Cordova* plug-in


    The source code for these samples can be found here: https://github.com/gomobile/sample-url-app, https://github.com/gomobile/sample-live-video-streams, https://github.com/gomobile/sample-audio-player, or download the Intel® XDK to check out all of the HTML5 samples and templates.

    XDKsamplesPanel_9-30.png

    Introduction

    The Intel® XDK is an HTML5 hybrid application development environment that allows users to develop, debug, test on-device, and build projects for various mobile platforms. Along with development features, this environment provides various HTML5 templates and samples intended for running on a range of mobile devices and platforms. For more information on getting started, go to https://software.intel.com/en-us/html5/xdkdocs.

    Purpose

    Besides developing HTML5 hybrid applications for various mobile platforms with the Intel XDK build system and distributing them as paid apps, developers can integrate ads such as AdMob* banner and interstitial ads, which generate additional revenue. By leveraging Apache Cordova* plug-ins, you can develop compelling HTML5 hybrid apps for any platform and use case. Apache Cordova is a set of device APIs that allows a mobile app developer to access native device functions, such as the camera or accelerometer, from JavaScript. Besides the standard APIs, various plug-ins are available in the Apache Cordova Plug-ins Registry and across the web on GitHub. For example, the com.google.admob Cordova plug-in provides the capability to initiate banner and interstitial ads as well as handle AdMob ad events.

    Design Considerations

    The URL/iFrame, Live Video Streams, and Audio Player apps are designed using the Intel XDK App Designer tool under the Develop tab. All of these sample applications also use the com.google.admob Cordova plug-in for displaying ads on screen. A Google AdMob account is required to obtain an ad unit ID for displaying banner or interstitial ads on the desired platform.

    Creating AdMob Ad Unit ID

    1. Login to https://apps.admob.com
    2. Navigate to the Monetize panel
    3. Click the Monetize new app button
    4. Input your App name, and desired mobile Platform
    5. Select ad format and name ad unit
    6. Review setup instructions

    Intel XDK url app screenshot_9-30

    The URL/iFrame app is a simple application that displays a web page within your application using an iframe. iFrames within mobile apps are widely used across mobile app stores to direct traffic to mobile-optimized sites and to reuse infrastructure that is already in place.

    Intel XDK Live Video Streams screenshot_9-30

    The Live Video Streams app is a simple multi-page application that uses the Intel App Framework UI JavaScript library and iframes to display online video streams of live content from various sources.

    Intel XDK audio player app screenshot_9-30

    The Audio Player App demonstrates how to play audio files local to the HTML5 project, online podcast, online .m3u playlist and files on device.

    Note: Intel XDK only supports playing *.wav files in the Emulator under the Emulate tab.

    Development /Testing

    Because the com.google.admob Cordova plug-in supports a limited set of mobile platforms, these sample applications have been tested on iOS and Android devices.

    Google Admob Cordova Plug-in Code Snippet

    /*
        Function: onDocLoad()
        Parameter: none
        Description: show the Banner Ad [initBanner(...) then showBanner(...)] or interstitial Ad [initInterstitial(...) then cacheInterstitial() then showInterstitial()];
    */
    function onDocLoad() {
        //show Banner ad
        //admobAd.initBanner(admob_banner_key, admobAd.AD_SIZE.BANNER.width, admobAd.AD_SIZE.BANNER.height);//create admob banner
        //admobAd.showBanner(admobAd.AD_POSITION.BOTTOM_CENTER);
    
        //show Interstitial ad
        admobAd.initInterstitial(admob_interstitial_key);//create Interstitial ad
        //admobAd.cacheInterstitial();// load admob Interstitial
        document.addEventListener(admobAd.AdEvent.onInterstitialReceive, onInterstitialReceive, false);
        document.addEventListener(admobAd.AdEvent.onInterstitialFailedReceive, onReceiveFail, false);
    
        admobAd.cacheInterstitial();// load admob Interstitial
    }
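    The banner path that onDocLoad() leaves commented out can be factored into a small helper. This is a sketch against the same admobAd plug-in API used above; the admob_banner_key value must come from your own AdMob ad unit, and the admobAd object is passed in rather than used as a global only to keep the sketch self-contained:

    ```javascript
    // Sketch: show a banner ad using the com.google.admob plug-in API from the
    // snippet above. admobAd is the plug-in object; admob_banner_key is your
    // AdMob banner ad unit ID.
    function showBannerAd(admobAd, admob_banner_key) {
        // Create the AdMob banner with the standard banner dimensions.
        admobAd.initBanner(admob_banner_key,
                           admobAd.AD_SIZE.BANNER.width,
                           admobAd.AD_SIZE.BANNER.height);
        // Dock the banner at the bottom center of the screen.
        admobAd.showBanner(admobAd.AD_POSITION.BOTTOM_CENTER);
    }
    ```

    In the sample itself, admobAd is a global created by the plug-in, so from onDocLoad() this would simply be called as showBannerAd(admobAd, admob_banner_key).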

    Note: You can download the entire source for the plugin at http://sourceforge.net/projects/phonegap-admob/.

    Touch Notifier Node.js IoT & HTML5 Companion App


    The source code for these templates can be found here: https://github.com/gomobile/iotapp-touch-notifier or download the Intel® XDK IoT Edition to check out all the node.js IoT application templates and samples plus HTML5 companion apps.

    IoT Projects Panel September 2014

    Introduction

    IoT Touch and Buzzer Setup September 2014

    The Intel® XDK IoT Edition is an HTML5 hybrid and Node.js application development environment that allows users to deploy, run, and debug applications on various IoT platforms, such as the Intel® Galileo and Intel® Edison boards running the IoT Development Kit Linux* image, and utilizes the Grove Starter Kit Plus – IoT Intel® Edition. With the starter kit and Linux* image installed, your development platform is ready to connect to the XDK IoT Edition and run your Node.js applications. Along with development features, this environment provides various Node.js templates and samples intended for running on Intel IoT platforms. For more information on getting started, go to https://software.intel.com/en-us/html5/documentation/getting-started-with-intel-xdk-iot-edition.

    Purpose

    IoT Projects Panel September 2014

    The Touch Notifier Node.js sample application, distributed within the Intel® XDK IoT Edition under the IoT with Node.js Projects project creation option, showcases how to read digital data from a Grove Starter Kit Plus – IoT Intel® Edition touch sensor, start a web server, and communicate wirelessly using WebSockets. Along with the touch sensor, a buzzer is used to alert the user engaging with the touch sensor that their actions have been registered. To communicate with sensors, this sample uses the MRAA sensor communication library. The intent of this library is to make it easier for developers and sensor manufacturers to map their sensors and actuators on top of supported hardware, and to allow control of low-level communication protocols from high-level languages and constructs.

    Touch Notifier HTML5 Projects Panel September 2014

    The Touch Sensor sample HTML5 companion application is also distributed under the Work with a Demo project creation option. This application communicates with the Node.js IoT application using the WebSockets API and notifies the mobile user when the touch sensor is being touched.

    Design Considerations

    This sample requires the mraa library and the xdk daemon to be installed on your board. Both are included in the IoT Development Kit Linux image, which enables communication between your board and the XDK IoT Edition as well as access to the I/O pins. An active connection to your local network, using an Ethernet cable or a wireless card such as the Intel 135N PCI Express WiFi/Bluetooth card, is required for the WebSockets communication between the board and the mobile device. The Node.js application reads data from the touch sensor periodically and sends it to the connected (mobile) clients using WebSockets.

    [image of node.js application source & development board with sensors connected]

    Node.js IoT application

    Required npm packages:

    • express
    • socket.io

    These packages are listed in the package.json file and are built and installed on the connected development board by clicking the Install/Build button in the bottom toolbar. After installing the node modules, upload the project (click the Upload button) and run it (click the Run button) using the Intel XDK IoT Edition to evaluate the communication between the board and the mobile application.

    Note: You will need to wait for the upload process to complete before running your project.

    Web Server

    //Create Web Server
    var app = require('express')();
    var http = require('http').Server(app);
    app.get('/', function (req, res) {'use strict';
        res.send('Hello world from Intel Galileo');
    });
    http.listen(1337, function () {'use strict';
        console.log('listening on *:1337');
    });

    Digital Read

    //Touch Sensor connected to D2 pin
    var digital_pin_D2 = new mraa.Gpio(2);
    digital_pin_D2.dir(mraa.DIR_IN);

    Digital Write

    //Buzzer connected to D6
    var digital_pin_D6 = new mraa.Gpio(6);
    digital_pin_D6.dir(mraa.DIR_OUT);
    //Write 0(OFF) to Buzzer connected to Digital IO Pin 6
    digital_pin_D6.write(0);

    WebSockets

    //Create Socket.io object with http server
    var io = require('socket.io')(http);
    //Attach a 'connection' event handler to the server
    io.on('connection', function (socket) {'use strict';
        console.log('a user connected');
        //Emits an event along with a message
        socket.emit('connected', 'Welcome');
    
        //Start watching Sensors connected to Galileo board
        startSensorWatch(socket);
    
        //Attach a 'disconnect' event handler to the socket
        socket.on('disconnect', function () {
            console.log('user disconnected');
        });
    });
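    The startSensorWatch() function called above is where the sensor data originates, but its body is not shown. A minimal sketch of one polling step, assuming the D2 touch-sensor and D6 buzzer pin objects created earlier; the pin objects are passed as parameters (the sample uses globals), and the polling interval is an assumption:

    ```javascript
    // One polling step for the touch notifier: read the touch sensor, drive the
    // buzzer, and notify the connected client when a touch is detected.
    // 'present' matches the message string the HTML5 companion app listens for.
    function pollTouchSensor(touchPin, buzzerPin, socket) {
        var touched = touchPin.read();  // 1 while the sensor is touched, else 0
        buzzerPin.write(touched);       // sound the buzzer while the sensor is touched
        if (touched === 1) {
            socket.emit('message', 'present');
        }
        return touched;
    }

    // startSensorWatch then just schedules the polling step periodically.
    function startSensorWatch(socket, touchPin, buzzerPin) {
        setInterval(function () {
            pollTouchSensor(touchPin, buzzerPin, socket);
        }, 100); // 100 ms polling interval is an assumption
    }
    ```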

    touchnotifierHTML5InputAddr.png September 2014      touchnotifierHTML5InActive.png September 2014      touchnotifierHTML5Active.png September 2014

    The HTML5 mobile application requires that the IP address of your board be entered in order to establish the WebSockets connection used to notify mobile users of the touch-sensor activity collected by your development board.

    HTML5 Companion application

    WebSockets

    try {
        //Connect to Server
        socket = io.connect("http://" + ip_addr + ":" + port);

        //Attach a 'connected' event handler to the socket
        socket.on("connected", function (message) {
            //Prompt user with Cordova notification alert - Include under Intel XDK Projects Panel
            navigator.notification.alert(
                'Welcome',      // message
                "",             // callback
                'Hi There!',    // title
                'Ok'            // buttonName
            );
        });

        //Set all Back buttons to not show
        $.ui.showBackButton = false;
        //Load page with transition
        $.ui.loadContent("#main", false, false, "fade");

        socket.on("message", function (message) {
            //alert("Is anyone there? " + message);
            if (message === "present") {
                $("#notifier_circle").attr("class", "green");
                //Update log
                $("#feedback_log").append(Date().substr(0, 21) + " Someone is Present!");
                //Prompt user with Cordova notification alert
                navigator.notification.alert(
                    'Someone is Present!',  // message
                    "",                     // callback
                    'Check Your Door',      // title
                    'Ok'                    // buttonName
                );
                //Wait 3 seconds then turn the notifier back to gray
                setTimeout(function () {
                    $("#notifier_circle").attr("class", "gray");
                }, 3000);
            }
        });
    } catch (e) {
        navigator.notification.alert(
            "Server Not Available!",  // message
            "",                       // callback
            'Connection Error!',      // title
            'Ok'                      // buttonName
        );
    }

    Development /Testing

    Each of the templates has been tested on Intel® Galileo generation 1 and 2 boards as well as the Intel® Edison board.

    Local Temperature Node.js IoT & HTML5 Companion App


    The source code for these templates can be found here: https://github.com/gomobile/iotapp-local-temperature or download the Intel® XDK IoT Edition to check out all the node.js IoT application templates and samples plus HTML5 companion apps.

    IoT Projects Panel September 2014

    Introduction

    Temperature Sensor Galileo Setup September 2014

    The Intel® XDK IoT Edition is an HTML5 hybrid and Node.js application development environment that allows users to deploy, run, and debug applications on various IoT platforms, such as the Intel® Galileo and Intel® Edison boards running the IoT Development Kit Linux* image, and utilizes the Grove Starter Kit Plus – IoT Intel® Edition. With the starter kit and Linux* image installed, your development platform is ready to connect to the XDK IoT Edition and run your Node.js applications. Along with development features, this environment provides various Node.js templates and samples intended for running on Intel IoT platforms. For more information on getting started, go to https://software.intel.com/en-us/html5/documentation/getting-started-with-intel-xdk-iot-edition.

    Purpose

    Local Temperature IoT Project Panel September 2014

    The Local Temperature Node.js sample application, distributed within the Intel® XDK IoT Edition under the IoT with Node.js Projects project creation option, showcases how to read analog data from a Grove Starter Kit Plus – IoT Intel® Edition temperature sensor, start a web server, and communicate wirelessly using WebSockets. To communicate with sensors, this sample uses the MRAA sensor communication library. The intent of this library is to make it easier for developers and sensor manufacturers to map their sensors and actuators on top of supported hardware, and to allow control of low-level communication protocols from high-level languages and constructs.

    Local Temperature HTML5 Companion Project Panel September 2014

    The Local Temperature sample HTML5 companion application is also distributed under the Work with a Demo project creation option. This application communicates with the Node.js IoT application using the WebSockets API and visualizes the received temperature data in a scatter plot using the D3 JavaScript library.

    Design Considerations

    This sample requires the mraa library and the xdk daemon to be installed on your board. Both are included in the IoT Development Kit Linux image, which enables communication between your board and the XDK IoT Edition as well as access to the I/O pins. An active connection to your local network, using an Ethernet cable or a wireless card such as the Intel 135N PCI Express WiFi/Bluetooth card, is required for the WebSockets communication between the board and the mobile device. The Node.js application reads data from the temperature sensor, converts the value to degrees Fahrenheit, and sends the collected temperature data to the connected (mobile) clients using WebSockets.

    [image of node.js application source & development board with sensors connected]

    Node.js IoT application

    Required npm packages:

    • express
    • socket.io

    These packages are listed in the package.json file and are built and installed on the connected development board by clicking the Install/Build button in the bottom toolbar. After installing the node modules, upload the project (click the Upload button) and run it (click the Run button) using the Intel XDK IoT Edition to evaluate the communication between the board and the mobile application.

    Note: You will need to wait for the upload process to complete before running your project.

    Web Server

    //Create Web Server
    var app = require('express')();
    var http = require('http').Server(app);
    app.get('/', function (req, res) {'use strict';
        res.send('Hello world from Intel Galileo');
    });
    http.listen(1337, function () {'use strict';
        console.log('listening on *:1337');
    });

    Analog Read

    var mraa = require("mraa");
    //GROVE Kit A0 --> Aio(0)
    var myAnalogPin = new mraa.Aio(0);
    //Shift Temperature Sensor Value bitwise to the right
    var a = myAnalogPin.read() >> 2;
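    The conversion to degrees Fahrenheit mentioned earlier is not shown in the snippets. Here is a sketch of one common conversion for a Grove thermistor-based temperature sensor; the B = 3975 constant, the 10 kΩ divider, and the use of the full 10-bit reading (0-1023, i.e. before the shift above) are assumptions about the sensor, not taken from the sample:

    ```javascript
    // Sketch: convert a raw 10-bit Grove temperature-sensor reading to degrees
    // Fahrenheit. The sensor is assumed to be an NTC thermistor on a 10 kOhm
    // divider with B = 3975, the constant commonly used in Grove examples.
    function rawToFahrenheit(raw) {
        var B = 3975;
        // Thermistor resistance from the voltage-divider equation.
        var resistance = (1023 - raw) * 10000 / raw;
        // B-parameter (simplified Steinhart-Hart) equation, referenced to 298.15 K (25 C).
        var celsius = 1 / (Math.log(resistance / 10000) / B + 1 / 298.15) - 273.15;
        return celsius * 9 / 5 + 32;
    }
    ```

    With these assumptions, a mid-range reading (around 512) corresponds to roughly room temperature, and higher raw values map to higher temperatures.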

    WebSockets

    //Create Socket.io object with http server
    var io = require('socket.io')(http);
    //Attach a 'connection' event handler to the server
    io.on('connection', function (socket) {'use strict';
        console.log('a user connected');
        //Emits an event along with a message
        socket.emit('connected', 'Welcome');
    
        //Start watching Sensors connected to Galileo board
        startSensorWatch(socket);
    
        //Attach a 'disconnect' event handler to the socket
        socket.on('disconnect', function () {
            console.log('user disconnected');
        });
    });

    localTempHTML5InputAddr.png September 2014      localTempHTML5Active.png September 2014

    The HTML5 mobile application requires that the IP address of your board be entered in order to establish the WebSockets connection used to visualize, on your mobile device, the temperature data collected by your development board.

    HTML5 Companion application

    WebSockets

        try {
            //Connect to Server
            socket = io.connect("http://" + ip_addr + ":" + port);
    
            //Attach a 'connected' event handler to the socket
            socket.on("connected", function (message) {
                //Apache Cordova Notification - Include under the Projects Panel
            navigator.notification.alert(
                "Great Job!",           // message
                "",                     // callback
                'You are Connected!',   // title
                'Ok'                    // buttonName
            );
    
                //Set all Back button to not show
                $.ui.showBackButton = false;
                //Load page with transition
                $.ui.loadContent("#main", false, false, "fade");
            });
    
            socket.on("message", function (message) {
                chart_data.push(message);
                plot();
                //Update log
                $("#feedback_log").text("Last Updated at " + Date().substr(0, 21));
            });
        } catch (e) {
        navigator.notification.alert(
            "Server Not Available!",  // message
            "",                       // callback
            'Connection Error!',      // title
            'Ok'                      // buttonName
        );
        }

    D3.js Scatter plot

    //Scatter plot-Selects the specified DOM element for appending the svg
    var chart_svg = d3.select("#chart").append("svg").attr("id", "container1").attr("width", window.innerWidth).attr("height", 0.6 * height).append("g");
    
    chart_svg.attr("transform", "translate(25," + margin.top + ")");
    
    var x1 = d3.scale.linear().domain([0, 5000]).range([0, 100000]);
    
    var y1 = d3.scale.linear().domain([0, 200]).range([0.5 * height, 0]);
    
    //Add horizontal grid lines at the Y-axis tick positions
    chart_svg.selectAll("line.y1")
        .data(y1.ticks(10))
        .enter().append("line")
        .attr("class", "y")
        .attr("x1", 0)
        .attr("x2", 100000)
        .attr("y1", y1)
        .attr("y2", y1)
        .style("stroke", "rgba(8, 16, 115, 0.2)");
    
    //This is for the Scatterplot X axis label
    chart_svg.append("text").attr("fill", "red").attr("text-anchor", "end").attr("x", 0.5 * window.innerWidth).attr("y", 0.55 * height).text("Periods");
    
    var x1Axis = d3.svg.axis().scale(x1).orient("top").tickPadding(0).ticks(1000);
    var y1Axis = d3.svg.axis().scale(y1).orient("left").tickPadding(0);
    
    chart_svg.append("g").attr("class", "x axis").attr("transform", "translate(0," + y1.range()[0] + ")").call(x1Axis);
    
    chart_svg.append("g").attr("class", "y axis").call(y1Axis);
    
    var dottedline = d3.svg.line().x(function (d, i) {
        'use strict';
        return x1(i);
    }).y(function (d, i) {
        'use strict';
        return y1(d);
    });
    ......

    Development /Testing

    Each of the templates has been tested on Intel® Galileo Generation 1 and 2 boards as well as the Intel® Edison board.

    How to model scalability using Intel® Advisor XE 2015


    Introduction

    Intel® Advisor XE 2015 now includes some new capabilities for modeling the scalability of your application. This article steps through runtime modeling and data set size modeling, and outlines these new capabilities.

    Run your application using the Intel Advisor XE 2015 suitability analysis

    1. First add annotations and rebuild your application in Release mode.
    2. Next bring up the Intel Advisor XE 2015 Workflow by clicking Tools > Advisor XE 2015 > Open Advisor Workflow.
    3. Click the Collect Suitability Data button.

    The key observation from the scalability graph is that the application is not scaling as designed. Intel Advisor XE reports a suggestion that your tasks are too fine-grained.

    Model runtime

    Intel Advisor XE has several techniques for modeling the runtime, letting you see how the application would perform if you changed the way it applies parallelism.

    Check all of the recommended changes.

    We can see clear benefits. Notice that in the case of 32 CPUs we go from less than a 1x improvement to over a 4x maximum gain from our parallel region.

    But our graph shows that we see the maximum gain when we have 16 CPUs; adding additional resources decreases the performance. There are several possible reasons that this can happen:

    • There is not enough work to keep all of the CPUs busy.

    • The work is not balanced on all of the CPUs.

    • The runtime overhead to keep track of additional threads is too high.

    • There is lock contention.
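These effects can be illustrated with a toy model. The formula and numbers below are assumptions for illustration, not Intel Advisor XE's internal model: treat the parallel time on n CPUs as the work divided by n plus a fixed overhead for each extra thread, and the predicted gain rises, peaks, and then falls.

```javascript
// Toy scalability model: parallel time = work/n + overhead per extra thread.
// WORK and OVERHEAD are made-up values chosen so the peak lands at 16 CPUs;
// they are not measurements from Intel Advisor XE.
function speedup(cpus, work, overheadPerThread) {
    var parallelTime = work / cpus + overheadPerThread * (cpus - 1);
    return work / parallelTime;   // serial time is just `work`
}

var WORK = 256;
var OVERHEAD = 1;

[1, 2, 4, 8, 16, 32].forEach(function (n) {
    console.log(n + " CPUs -> " + speedup(n, WORK, OVERHEAD).toFixed(1) + "x");
});
```

With these assumed values the modeled gain peaks at 16 CPUs and declines at 32, matching the shape of the behavior described above.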

    Model Data Set Size

    Intel Advisor XE 2015 now has a way for you to model how your parallel region will perform under different workloads. It allows you to change the size of your data set and modify how long each task iteration takes, thereby testing the scalability without making any code changes. This feature is particularly useful for CPU-bound workloads.

    Look at the Loops iteration (Task) Modeling area of the Intel Advisor XE.

    Using the two sliders you can increase or decrease the number of tasks, that is, the size of your workload. You can also increase or decrease how long each task takes. Testing your design under different workloads is critical to understanding whether you are using the correct design for the amount of work your problem requires. Intel Advisor XE lets you increase or decrease the number of tasks in multiples of 5 (5x, 25x, and 125x), and you can scale the duration of tasks in the same way.

    In our example let’s multiply both the number of tasks and the duration of each task by 25.

    You can see below that the algorithm performs differently when run under this new workload.

    Not only does the new algorithm scale as we increase the CPU count past 16, we also increase our maximum site gain to 32x from the previous 4x gain for 32 CPUs. In this example the parallel region does not have enough work and each of the tasks is too fine-grained to work efficiently. We have shown that we can achieve good scalability by increasing the data set size and also increasing the amount of work each task performs.

    Summary

    Intel Advisor XE 2015 is a powerful tool for modeling the scalability of your application. By using runtime modeling and the new features to dynamically change the size of your data set you can see how to tune your algorithm without making any code changes.

     


     

    How to use the Intel® Energy Profiler in Intel® System Studio 2015


    Introduction

    Intel® System Studio 2015 now contains an energy and power profiler called Intel® Energy Profiler. Using the Intel Energy Profiler allows you to collect sleep state, frequency and temperature data that lets you find the software that is causing unwanted power use. This article provides an overview of the Intel® Energy Profiler.

    Background

    Two key indicators that affect how much power your system uses are CPU sleep states and CPU frequency.

    • CPU sleep states (C-states)
      • CPU sleep states indicate the intervals during which your CPU is asleep. The main goal of sleep-state data is to show when the system was sleeping, which sleep state it was in, and what woke it up. The higher the sleep state, the lower the power usage.
    • CPU frequency (P-states)
      • CPUs also have the ability to change the core clock frequency; the higher the frequency, the higher the power usage.

    The following diagram shows how these C-states and P-states interrelate.

    Depending on your Android target you will need to use a different collector:

    • Silvermont, Haswell systems
      • socwatch collector
    • Clovertrail+
      • wuwatch
    • Other embedded Linux* systems
      • amplxe-cl

    The remainder of this article will focus on how you can collect C-state and P-state data for your system and visualize it with Intel® VTune™ Amplifier 2015 for Systems, using socwatch.

    Socwatch

    SoC Watch (aka SoCWatch) is a command line tool for monitoring system behaviors related to power consumption on Intel® architecture-based platforms. It monitors power states, frequencies, wakeups, and various other metrics that provide insight into the system’s energy efficiency.

    The following diagram shows how socwatch can be used.

     

    Installation

    SoCWatch is currently intended for Android-based systems. See the release notes for detailed system requirements.

    It is possible that your Android-based system already has the socwatch kernel modules integrated into the system. If this is not the case, see the section titled "Building the Kernel Module".

    Make sure your host is connected to the target via adb before running the install script.

    1. After unzipping the SoCWatch package on your host system, run the socwatch_android_install.sh script (on a Linux host, or from a Cygwin window on a Windows host), or run the socwatch_android_install.bat file from a Windows command prompt.

    socwatch_android_install.sh or socwatch_android_install.bat

    The script installs the socwatch executables to the /data/socwatch directory on the target by default. Use the -d option to select a different install directory and the -s option to define a specific Android* device if multiple devices are connected to the host via adb.

    2. Using the adb command, start a shell with root privileges.

    > adb root

    > adb shell

    3. Navigate to the directory containing the driver.

    > cd /lib/modules

    4. Load the driver.

    > insmod SOCWATCH1_*.ko

    5. Confirm the driver is loaded.

    > lsmod

    6. When necessary, type rmmod SOCWATCH1_* to unload the driver.

    Building the Kernel Module

    If the SoCWatch device driver is not pre-installed in the OS image, you will need to build it.

    1. Copy the socwatch_android.zip file to the host system used to build the Android* kernel.

    2. Extract the contents with the command:

    > tar xvzf socwatch_android.zip

    The socwatch_android directory is created.

    3. cd into the socwatch_android/socwatch_driver directory.

    4. Execute the build_driver script with the command:

    > sh ./build_driver -k <kernel-build-dir>

    where <kernel-build-dir> is replaced with the local Android* lib/modules directory produced while building the Android* kernel. The -k switch is optional and is not required if the DEFAULT_KERNEL_BUILD_DIR value is properly set in the build_driver script.

    Collecting data on your system

    cd /data/socwatch

    . ./setup_socwatch_env.sh

    To collect C-state and P-state data on your system, run the following socwatch command.

    > ./socwatch --max-detail -f cpu-cstate -f cpu-pstate -t 60 -o ./results/test

    This command runs for 60 seconds and creates files called test.sw1 and test.csv in the results directory.

    To view the collected data you need to copy the files back to your host system using adb.

    > adb pull /data/socwatch/results/test.csv c:\results

    > adb pull /data/socwatch/results/test.sw1 c:\results

    To import these files into VTune Amplifier 2015 for Systems, run the following commands from the Windows command prompt:

    1. First run the amplxe-vars.bat file in the VTune Amplifier install directory.
    2. Run amplxe-cl.exe -import test.sw1 C:\results\test

    This will create a results directory called test in C:\results. You can open this result directory using the following command:

    amplxe-gui.exe C:\results\test

    Frequency summary:

    Sleep state summary:

     

     

     


    How to do a system wide analysis using Intel® VTune™ Amplifier for Systems


    Introduction

    Intel® System Studio is Intel’s suite of software tools for embedded processors and contains features that allow you to profile your entire system and not just the individual cores. This article will describe a new method of profiling your system on an embedded target.

    Background

    There are several techniques you can use to profile your system. This article uses the command line version of VTune™ Amplifier for Systems, known as amplxe-cl. The amplxe-cl program now contains the option to specify a remote target. To profile your system you need to specify the proper analysis type; this article analyzes the memory bandwidth of a system.

    We will be using the Intel processor, code named Sandy Bridge. The command for profiling your system is as follows: 

        amplxe-cl -target ssh:root@ip_addr -collect snb-bandwidth -d 5 --search-dir bin:p=<local directory containing modules>

    • -target ssh:root@ip_addr
      • This specifies that this will be a remote collection over ssh to the system running at IP address ip_addr.
    • -collect snb-bandwidth
      • This specifies that we will be collecting memory bandwidth on an Intel target, code named Sandy Bridge.
    • -d 5
      • This specifies that the collection will run for 5 seconds.
    • --search-dir bin:p=<local directory containing modules>
      • For system-wide collection, many of the modules running on the system during the collection are copied from the target to the host, which may take a while. However, this only happens once, since amplxe-cl caches the target system modules on the host for faster access during the next collection. If you do not want the command to take the modules from the device, you can specify a local directory on the host to be searched first, as above. See the VTune Amplifier for Systems help for more information.

    VTune Amplifier 2014 for Systems will create a result directory containing the data you collected. To view the data you would use the command:

    amplxe-gui r001bw

    (where r001bw is the actual directory containing your results)

    Using the Bandwidth View inside of VTune Amplifier for Systems you can see the Read and Write bandwidth expressed in Gbytes per second.

    You can also view the PMU (core) and Uncore based events by using the Hardware Events Count view but you will need to view the PMU counters separately from the Uncore counters:

    PMU View:

    Uncore View:

    Summary

    VTune Amplifier for Systems contains some powerful new features for profiling your system. Using these remote capabilities together with the predefined analysis types, you can easily collect and view your profiling data and solve the most difficult performance bottlenecks.

     


    Adding AdMob* by Google* to Your Cordova* Application


    If you want to include AdMob* (by Google*) advertisements as part of your HTML5 hybrid mobile app, you will need to use a Cordova plugin. Unlike the desktop browser solution, mobile apps require a native code component to retrieve and display ads on mobile devices. Not all mobile ad services have this restriction, but serving AdMob advertisements in your app does require a Cordova plugin.

    Intel does not endorse any specific advertising plugin for use with the Intel XDK! Be aware that some advertising plugins have fees or revenue sharing features built into them. We cannot guarantee which plugins are the best or most appropriate for use with your application.

    There are a variety of Cordova plugins available for serving ads; some serve ads from third-party sources, a few serve ads from the AdMob network. You are not required to use AdMob to serve ads, but only the use of AdMob plugins are described in this article. Plugins for ad services can be located by doing a general search of the web for "mobile ad services."

    AdMob plugins for Cordova can be found by searching the Cordova Plugins Registry or PlugReg (an independent Cordova plugin registry). Details regarding how to use the AdMob system as a means of monetization are available in the AdMob support pages.

    Before you can serve any AdMob ads you must sign up for an AdMob account at www.admob.com. AdMob does not charge a fee for creating an account or for serving up AdMob ads (be aware that some AdMob plugins vendors do charge a fee for the use of their plugin). If you have an AdMob account you must create a set of Ad Unit IDs that identify your ad impressions and provide these IDs as part of the AdMob API initialization sequence within your app. A screenshot of the online AdMob tool you use to create the Ad Unit IDs is shown below.

    IMPORTANT: each application should have its own set of Ad Unit IDs! If you do not yet have an app in an app store, you can use the "manual" method to identify your app for the purpose of obtaining Ad Unit IDs.
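As an illustration of that initialization sequence, here is a sketch. The admob object, its createBanner method, and its option names are hypothetical stand-ins (real AdMob Cordova plugins differ by author, so check your plugin's documentation); only the idea of passing your Ad Unit ID during initialization comes from the description above.

```javascript
// Hypothetical AdMob initialization sketch. The `admob` object below is a
// stub standing in for whatever global your chosen plugin exposes, so the
// sketch is runnable outside a device build; method and option names are
// illustrative only.
var admob = {
    banner: null,
    createBanner: function (opts, onSuccess, onFail) {
        if (!opts.adUnitId) { onFail(new Error("missing ad unit id")); return; }
        this.banner = opts;   // pretend the native side created a banner
        onSuccess();
    }
};

// Each application should have its own Ad Unit IDs from the AdMob console.
var AD_UNIT_ID = "ca-app-pub-XXXXXXXXXXXXXXXX/NNNNNNNNNN";  // placeholder

function initAds(done) {
    admob.createBanner(
        { adUnitId: AD_UNIT_ID, autoShow: true },
        function () { done(null, "banner requested"); },
        function (err) { done(err); }
    );
}

initAds(function (err, status) {
    console.log(err ? "ad init failed: " + err.message : status);
});
```

On a device you would drop the stub, use the object your plugin actually exposes, and call initAds after the deviceready event fires.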

    To see the latest available samples provided with the Intel XDK it is best to check the "Work with a Demo" option under the "Start a New Project" button. In addition to those samples provided on the Intel XDK "Work with a Demo" panel, a few additional examples are listed below; they may use different AdMob plugins (from different authors):

    The article titled HTML5 Hybrid Apps with Admob* Cordova* Plugins provides details for several of the AdMob examples available in the Intel XDK.

    NOTE: because an AdMob application includes a third-party Cordova plugin, it will only run on a real device. You must use the Build tab to create an APK (for Android) or IPA (for iOS) to test your AdMob application. If you attempt to run this app using the Emulate, Test or Debug tabs the AdMob APIs will fail. Apps that utilize Cordova plugins can only be built using the Cordova build targets. Attempting to use the "legacy" build targets will not work.

    Using the Dolby API in the Intel XDK


    When I think of Dolby technology, I usually think of movie theaters and high end stereos, adding life to soundtracks and music. It turns out that they've also been working on bringing some of that clever technology to hand-held devices. This technology is designed to enhance different sorts of audio, including games, movies and music. This is cool if you own one of these devices and can experience better audio, but it's also interesting to developers.

    As an app developer, you want to be able to take advantage of this technology to maximum effect. If you're designing a game, you want to enhance it by ensuring that the Dolby game profile is active. If you're playing music or video, you'll want to activate the appropriate profile. Fortunately the Dolby Cordova Plugin makes that easy.

    The Dolby API consists of three parts:

    • dolby.dap - contains various functions you'll want to call, like initialize and setProfile
    • dolby.DapProfile - contains the available profiles (GAME, MUSIC, VOICE and MOVIE)
    • dolby.DapError - various error codes, provided as parameters to onFail callbacks

    If all you want to do is enable Dolby functionality, that's easy. The API relies on a Cordova Plugin which is available as a featured plugin in the Intel XDK. To access it, just go to the projects page, expand the "Plugins And Permissions" pane and check "Dolby Audio API". This will cause the plugin to be loaded in your app, exposed as a global variable dolby. For example, if you want to enable the feature for your game, you just need to add this to your initialization code:

    var onDsConnected = function() {
        console.log("Dolby started successfully");
    };
    var onFail = function(err) {
        console.log("Dolby failed due to ", err);
    };
    dolby.dap.initialize(dolby.DapProfile.GAME, onDsConnected, onFail);

    Now, if all you care about is enabling it, this will be fine. But you probably want something more sophisticated, say if you want to switch to the music profile for interstitial music while loading a new level:

    dolby.dap.setProfile(dolby.DapProfile.MUSIC);

    Aside from initializing and setting profiles, there are a few other convenient capabilities. Suppose you've got things set up for your app, with the profile set for your wicked action-packed game, but then the user gets a phone call. The user, no doubt savvy about these things, has his phone set to use the VOICE profile, the better to hear his phone conversations. You don't want to mess with that, so when the game is interrupted, you want to defer to whatever the system setting is, and then go back to the GAME profile when it resumes. The Dolby API allows for that with the suspendSession() and restartSession() functions:

    onPause = function () {
        dolby.dap.suspendSession();
    };
    document.addEventListener("pause", onPause, false);
    
    
    onResume = function () {
        var success = function () {
            console.log("Resuming OK");
        };
        var err = function () {
            console.log("Problem resuming");
        };
    
        dolby.dap.restartSession(success, err);
    };
    document.addEventListener("resume", onResume, false);

    There are a couple other functions, like getSelectedProfile() to check the current setting, and release() to release control and go back to the system settings. All the details can be found in the documentation available at the github site - https://github.com/DolbyDev/Dolby-Audio-Plugin-for-Cordova or at Dolby's developer site - http://developer.dolby.com. I've also got a working demo app at https://github.com/oldgeeksguide/dolby-cordova-plugin-example that you can grab and import into the Intel XDK.
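For completeness, a short sketch of those two calls. The dolby.dap object below is a stub so the flow can run outside a device build; on a real device the plugin provides the global, and its functions may take success/error callbacks rather than returning values directly, so treat the signatures here as illustrative.

```javascript
// Sketch: query the active profile, then hand control back to the system.
// Stubbed dolby.dap object for illustration only -- on a device the real
// object comes from the Dolby Audio API Cordova plugin.
var dolby = {
    dap: {
        _profile: "GAME",
        getSelectedProfile: function () { return this._profile; },
        release: function () { this._profile = null; }  // back to system settings
    }
};

// Before tearing down, remember what was selected, then release control:
var lastProfile = dolby.dap.getSelectedProfile();
console.log("Profile in use was: " + lastProfile);
dolby.dap.release();
```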

    And if you get a chance to try a device with this capability, check it out. It makes the built-in speakers sound better, and works great with headphones!

    Diagnostic 15304: non-vectorizable loop instance


    Product Version: Intel(R) Visual Fortran Compiler XE 15.0.0.070  

    Cause:

    The vectorization report generated using the Visual Fortran Compiler's optimization and vectorization report options (-Qvec-report2 -O2) includes a non-vectorized loop instance when the following compiler option is used (Windows OS): -assume:dummy_aliases

    Example:

    The example below generates the following remark in the optimization report:

    subroutine foo(a, b, n)
      implicit none
      real, intent(inout) :: a(n)
      real, intent(in)    :: b(n)
      integer, intent(in) :: n
      integer :: i
    
      do i=1,n
             a(i) = b(i)+1
      end do
    
    end subroutine foo
    

     

    Report from: Vector & Auto-parallelization optimizations [vec, par]

    LOOP BEGIN at f15304.f90(8,3)
    <Multiversioned v2>
       remark #15304: loop was not vectorized: non-vectorizable loop instance from multiversioning
    LOOP END

    Resolution: 

    Using the compiler option -assume:nodummy_aliases (on Linux OS and OS X: -assume nodummy_aliases) will improve performance and get this loop to vectorize; however, care should be taken to ensure that programs compiled with -assume:nodummy_aliases output correct results.

     

    See also:

    Requirements for Vectorizable Loops

    Vectorization Essentials

    Vectorization and Optimization Reports

    Back to the list of vectorization diagnostics for Intel Fortran

     

    Diagnostic 15331: Using FP model: precise prevents vectorization


    Product Version: Intel(R) Visual Fortran Compiler XE 15.0.0.070 

    Cause:

    When using the Intel Fortran Compiler's option -fp:precise (Linux OS and OS X syntax: -fp-model precise), the vectorization report generated using the Visual Fortran Compiler's optimization and vectorization report options (-O2 -Qvec-report2 -Qopt-report:2) includes a non-vectorized loop instance.

    Example:

    The example below generates the following remark in the optimization report:

     

    subroutine foo(a, b, n)
        implicit none
        integer, intent(in) :: n
        real, intent(inout) :: a(n)
        real, intent(out)   :: b
        real :: x = 0
        integer :: i

        do i=1,n
            x = x + a(i)
        end do

        b = x

    end subroutine foo

    Report from: Vector & Auto-parallelization optimizations [vec, par]

    LOOP BEGIN at f15331.f90(9,2)

    remark #15331: loop was not vectorized: precise FP model implied by the command line or a directive prevents vectorization. Consider using fast FP model 

    LOOP END

    Resolution:

    Using the fast FP model option (Windows: -fp:fast; Linux and OS X syntax: -fp-model fast), which allows more aggressive optimizations on floating-point data, will get this loop vectorized.

    See also:

    fp-model, fp

    Vectorization Essentials

    Vectorization and Optimization Reports

     

    Back to the list of vectorization diagnostics for Intel Fortran


    Diagnostic 15319: Using NOVECTOR directive


    Product Version: Intel(R) Visual Fortran Compiler XE 15.0.0.070 

    Cause:

    When the NOVECTOR directive is used in code, the vectorization report generated using the Visual Fortran Compiler's optimization report options (-O2 -Qopt-report:2) includes a non-vectorized loop instance.

    Example:

    The example below generates the following remark in the optimization report:

    subroutine foo(a, b, n)
        implicit none
        integer, intent(in) :: n
        real, intent(inout) :: a(n)
        real, intent(in)    :: b(n)
        integer :: i

    !DEC$ NOVECTOR
        do i=1,n
            a(i) = b(i)+1
        end do

    end subroutine foo

     

    Report from: Vector & Auto-parallelization optimizations [vec, par]

    LOOP BEGIN at f15319.f90(11,8)

     remark #15319: loop was not vectorized: novector directive used

    LOOP END

    Resolution:

    There may be cases where you want to explicitly avoid vectorization of a loop; for example, if vectorization would result in a performance regression rather than an improvement. In these cases, you can use the NOVECTOR directive to disable vectorization of the loop. 

    See also:   

    VECTOR and NOVECTOR

    Requirements for Vectorizable Loops

    Vectorization Essentials

    Vectorization and Optimization Reports

    Back to the list of vectorization diagnostics for Intel Fortran


    Diagnostic 15335: Vectorization possible but seems inefficient


    Product Version: Intel(R) Visual Fortran Compiler XE 15.0.0.070 

    Cause: 

    The vectorization report generated using the Visual Fortran Compiler's optimization and vectorization report options (-O2 -Qvec-report2 -Qopt-report:2) states that the loop was not vectorized: vectorization possible but seems inefficient.

    Example:

    The example below generates the following remark in the optimization report:

    subroutine foo(a, b, n)
        implicit none
        integer, intent(in) :: n
        real, intent(inout) :: a(n)
        real, intent(out)   :: b
        real :: x = 0
        integer :: i
    
        do i=1,n
                x = x + a(2*i)
        end do
    
        b = x
    
    end subroutine foo

     

     Report from: Vector optimizations [vec]

    LOOP BEGIN at f15335.f90(9,5)

       remark #15335: loop was not vectorized: vectorization possible but seems inefficient. Use vector always directive or /Qvec-threshold0 to override
    LOOP END

    Resolution:

    Using the !DEC$ VECTOR ALWAYS directive in the code will vectorize the loop by overriding the vectorizer's efficiency heuristics.

    See also:   
    VECTOR and NOVECTOR

    Vectorization and Optimization Reports

    Back to the list of vectorization diagnostics for Intel Fortran

     

    Diagnostic 15344: Loop was not vectorized: vector dependence prevents vectorization


    Product Version: Intel(R) Visual Fortran Compiler XE 15.0.0.070

    Cause:

    When using the Intel Visual Fortran Compiler's optimization and vectorization report options (-O2 -Qvec-report2 -Qopt-report:2), the generated vectorization report states that the loop was not vectorized because a vector dependence prevents vectorization.

    Example:

    The example below generates the following remark in the optimization report:

    integer function foo(a, n)
        implicit none
        integer, intent(in) :: n
        real, intent(inout) :: a(n)
        real :: max
        integer :: inx, i
    
    max = a(1)
        do i=1,n
            if (max < a(i)) then
                max = a(i)
                inx = i*i
            endif
        end do
    
        foo = inx
    
    end function

    Report from: Vector optimizations [vec]

    LOOP BEGIN at  f15344.f90(9,5)
       remark #15344: loop was not vectorized: vector dependence prevents vectorization. First dependence is shown below. Use level 5 report for details   [ f15344.f90(12,13) ]
       remark #15346: vector dependence: assumed ANTI dependence between  line 10 and  line 10   [ f15344.f90(12,13) ]
    LOOP END

    Resolution:

    Rewriting the code as in the following example resolves the vector dependence, and the loop will be vectorized:

    integer function foo(a, n)
        implicit none
        integer, intent(in) :: n
        real, intent(inout) :: a(n)
        real :: max
        integer :: inx, i
    
    max = a(1)
        do i=1,n
            if (max < a(i)) then
                max = a(i)
                inx = i
            endif
        end do
    
        foo = inx*inx
    
    end function
    

    See also:

    Requirements for Vectorizable Loops

    Vectorization Essentials

    Vectorization and Optimization Reports

    Back to the list of vectorization diagnostics for Intel Fortran

     

     

     

  • Diagnostic 15414: Loop was not vectorized: loop body became empty after optimizations


    Product Version: Intel(R) Visual Fortran Compiler XE 15.0.0.070

    Cause:

    The vectorization report generated when using the Visual Fortran Compiler's optimization options (-O2 -Qopt-report:2) states that the loop was not vectorized because the loop body became empty after optimizations.

    Example:

    The example below generates the following remark in the optimization report:

    integer function foo(a, b, n)
        implicit none
        integer, intent(in) :: n
        real, intent(inout) :: a
        real, intent (in)   :: b
        integer :: i
    
        do i=1,n
               a = b + 1
        end do
    
        foo = a
    
    end function 

    Report from: Vector optimizations [vec]

    LOOP BEGIN at f15414.f90(8,5)
       remark #15414: loop was not vectorized: nothing to vectorize since loop body became empty after optimizations
    LOOP END

    Resolution:

    In the example above, the loop contains a single assignment that does not depend on the loop index. The compiler hoists this loop-invariant assignment out of the loop, leaving an empty loop body with nothing left to vectorize. This diagnostic is informational; no source change is required.
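    Conceptually, after loop-invariant code motion the function behaves as if it had been written without the loop. The following is an illustrative sketch of the transformed logic, not actual compiler output:

```fortran
integer function foo(a, b, n)
    implicit none
    integer, intent(in) :: n
    real, intent(inout) :: a
    real, intent(in)    :: b

    ! The invariant assignment executes once (if the loop would have run
    ! at all); the now-empty loop is removed entirely.
    if (n >= 1) a = b + 1

    foo = a

end function
```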

    See also:

    Requirements for Vectorizable Loops

    Vectorization Essentials

    Vectorization and Optimization Reports

    Back to the list of vectorization diagnostics for Intel Fortran

  • Intel® XDK "Cordova for Android" Build Options


    The Intel® XDK "Cordova for Android" and "Crosswalk for Android" build systems use a special configuration file in your project source directory to direct the build process. The Cordova build option is based on the open-source Apache* Cordova CLI build system. When you use the Cordova build option, your application project files are submitted to the Intel XDK build server, where a Cordova CLI system is hosted and maintained; there is no need to install the open-source Cordova CLI system on your workstation.

    The options described on this page pertain only to Android and Crosswalk builds; they will not affect builds for other target platforms. It is generally safe to include these options in your build configuration file when submitting a build for other platform targets (such as iOS and Windows Phone 8); in most cases they will simply be ignored.

    NOTE: as of version 1199, the build configuration files are generated automatically from the input you provide on the Projects tab, producing intelxdk.config.android.xml and intelxdk.config.crosswalk.xml files. The options described below can be used inside an intelxdk.config.additions.xml file to exercise finer-grained control than the Projects tab provides and to control features that the Projects tab does not handle.

    For detailed information regarding the structure and contents of the intelxdk.config.xml file, please read the Using the Intel XDK Cordova Build Option article.
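    For orientation, an intelxdk.config.additions.xml file is a plain XML fragment containing only the extra entries you want merged into the auto-generated configuration. The sketch below is illustrative only; consult the linked article for the exact file structure:

```xml
<!-- intelxdk.config.additions.xml: sketch only; entries here are merged
     into the auto-generated build configuration. -->
<preference name="android-installLocation" value="auto" />
```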

    Android Launch Icon Specifications

    If no icon files are provided with your project, the build system will provide default icons. It is highly recommended that you replace the default icons with icons of your own before submitting your application to a store. See this article on the Android Developer site for details regarding Android launch icons. If you do not provide custom icons it is likely that your application will be rejected from the Android store.

    Icon files must be provided in PNG format. The height and width numbers in the table (below) are in pixels.

    Density      Width   Height
    xxxhdpi ‡    192     192
    xxhdpi †     144     144
    xhdpi        96      96
    hdpi         72      72
    mdpi         48      48
    ldpi *       36      36
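    In a stock Apache Cordova project, these densities are declared in config.xml with icon elements. The following is shown for reference only, since the Intel XDK normally manages icons through the Projects tab; the file paths are hypothetical:

```xml
<!-- Reference: Cordova-style icon declarations; paths are hypothetical. -->
<icon src="res/icons/android/icon-192.png" density="xxxhdpi" />
<icon src="res/icons/android/icon-144.png" density="xxhdpi" />
<icon src="res/icons/android/icon-96.png"  density="xhdpi" />
<icon src="res/icons/android/icon-72.png"  density="hdpi" />
<icon src="res/icons/android/icon-48.png"  density="mdpi" />
<icon src="res/icons/android/icon-36.png"  density="ldpi" />
```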

    * ldpi icon files are optional; if not provided the Android OS will automatically downscale your hdpi icon by a factor of two.

    † xxhdpi icon files are only supported on hi-res Android 4.4 and above devices; these icon resolutions are not supported by Cordova 3.3.

    ‡ xxxhdpi icon files are not supported by any Android devices, at this time; these icon resolutions are not supported by Cordova 3.3.

    The launch icon you provide when you submit your application to the Google Play Store must be 512x512 pixels in size. This icon is not included in your application; it is part of your store submission package. The Intel XDK does not include a store submission tool; you must submit your application manually using your Android developer account.

    Android Splash Screen Image Specifications

    Your application will display a splash screen during initialization. This provides a "getting ready" indication while your app and the underlying device API infrastructure initialize. The build system provides default images if no splash files are provided with your project. It is highly recommended that you replace the default splash screen images with your own before submitting your application to a store. See the splash screen section of the Cordova for * Build Options article for more information.

    Splash screen images can be provided in PNG, JPG, or JPEG format. The height and width numbers in the table below are the minimum recommended pixel sizes for the respective screen densities. See this article on the Android Developer site for details regarding Android screen sizes. NOTE: the dimensions shown assume a landscape orientation; reverse the numbers for portrait.

    Density   Width   Height
    xhdpi     960     720
    hdpi      640     480
    mdpi      470     320
    ldpi      426     320

    For greater adaptability of your splash screens, you should use 9-patch images. For more information see this StackOverflow posting and Android developer tools article.

    NOTE: there have been issues with splash screens on the Cordova for Android platform when building with the Intel XDK. The issue regarding landscape and portrait splash screens has been resolved (they now work as expected), but nine-patch splash screens are still being worked on; this notice will be removed when they work. Until then, for best results, design your splash screens to a 16:9 ratio, which should minimize distortion on most modern Android phones and tablets (for example, the Samsung S3 has a 16:9 screen ratio and the Nexus 7 tablets have a 16:10 ratio). The splash screen resolutions specified above are minimum recommended dimensions, not absolute requirements.

    Android Build Preferences

    android-minSdkVersion
    Specifies the minimum Android operating system version on which the application will install and run. For best results it is recommended that you specify Android 2.3.3 (API level 10) or higher; this value is the default if no minimum is provided. See "Versioning Your Applications" for an overview of how to assign version numbers to Android applications.
    <preference name="android-minSdkVersion" value="<N>" />
    <N> is a number representing the minimum supported Android version.
    android-targetSdkVersion
    Specifies the target Android operating system version. This is the version of Android you have tested against; the operating system uses it to ensure that future versions of your application continue to run by observing compatibility behaviors. See "Versioning Your Applications" for an overview of how to assign version numbers to Android applications.
    <preference name="android-targetSdkVersion" value="<N>" />
    <N> is a number representing the target Android version.

    The value 19 is recommended to minimize the effect of the Android 4.4 "webview quirks mode" when running your application on Android 4.4+ devices. However, some applications may need this "webview quirks mode" behavior and should specify a lower API level (such as 17 or 18). Start with 19 and ensure your app works on Android 4.4+ devices; if you see issues there, try 17 or 18 and see if your app works better.
    android-installLocation
    Specifies where the application can be installed (internal memory, external memory, sdcard, etc.).
    <preference name="android-installLocation" value="<LOCATION>" />
    <LOCATION> indicates where the application can be installed.
    android-windowSoftInputMode
    Specifies how the application interacts with the on-screen soft keyboard.
    <preference name="android-windowSoftInputMode" value="<INPUTMODE>" />
    <INPUTMODE> determines the state and features of the keyboard.
    android-permission
    For adding application device permissions.
    <preference name="android-permission" value="<PERMISSION>" />
    <PERMISSION> is the permission identifier. The list of valid permissions may vary depending on the version of the Android operating system that is targeted. See the Manifest Permission List on the Android Developer site for a complete list of Android permissions.

    NOTE: the NETWORK permission will always be part of your Cordova application, even if no Cordova plugins have been included in your application. This is due to the way the Cordova framework communicates ("bridges the gap") between the HTML5 JavaScript layer and the underlying native code layer.
    android-signed
    Allows you to indicate if the application will be signed (for distribution in the Android store).
    <preference name="android-signed" value="<bool>" />
    <bool> indicates whether the application should be signed: "false" means the application will not be signed; "true" means it will be. The default is "true".
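    Putting several of the preferences above together, a hypothetical intelxdk.config.additions.xml fragment might look like the following. The preference names are the ones described in this article; the values are illustrative only:

```xml
<!-- Example only: preference names are from this article; values are illustrative. -->
<preference name="android-minSdkVersion" value="10" />
<preference name="android-targetSdkVersion" value="19" />
<preference name="android-windowSoftInputMode" value="adjustPan" />
<preference name="android-permission" value="ACCESS_FINE_LOCATION" />
<preference name="android-signed" value="true" />
```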

    OpenCL™ Code Samples
