
Intel® Cluster Ready Architecture Specification version 1.3.1 Summary


The Intel® Cluster Ready architecture specification version 1.3.1 was officially released in July 2014. This is a minor update from version 1.3, and most of the changes between the versions relate to the following:

  • removal of library or tool requirements based on analysis of Intel® Cluster Ready registered applications
  • updated/refreshed required versions of key libraries and tools

Details of the updates to the architecture requirements:

4.2 Base Software Requirements

4.2.4 Updated to require Intel® Cluster Checker version 2.2 or later.
Rationale: Intel® Cluster Checker version 2.2 adds the capability to check compliance against this version (1.3.1) of the Intel® Cluster Ready architecture specification.

6.1 OS Kernel

6.1.3 Added an exception for the OS kernel of Red Hat* Enterprise Linux* 5.10.
Rationale: the patched kernel from Red Hat satisfies the requirement.

6.2 OS Interface and Basic Runtime Environment

6.2.7 (new sub-bullet; not present in version 1.3) Separated out the requirement for the Intel MPI Library Runtime Environment; only the LP64 version is now required.
Rationale: ILP32 components may not be available, depending on the version used.

6.2.9 (was sub-bullet 6.2.8 in version 1.3) Added a footnote to the libcrypto.so.6 requirement describing which versions satisfy it.
Rationale: equivalent versions of libcrypto.so.X are included in Linux* distributions in various ways; the footnote is meant to provide clarity and guidance that a symlink may be used.

6.2.11 (was sub-bullet 6.2.10 in version 1.3) Removed the requirement for libI810XvMC.so.1.
Rationale: analysis showed this library was not used by registered applications; some newer Linux* distributions have deprecated the library.

6.3 Command System and Tools

6.3.5 Removed the requirements for the X11 applications xfontsel, xload, xfd, x11perfcomp, xmessage, xclipboard, and x11perf.
Rationale: analysis showed these utilities were not used by registered applications; some newer Linux* distributions have deprecated some or all of these tools.

In addition to the updates to the requirements, version 1.3.1 updates many of the advisory statements regarding suggested versions of libraries and tools.

Details of the updates or changes to the architecture advisories:

9.4 OS Interface, Basic Runtime Environment – advisory statement

9.4.1a Advises a gfortran runtime version of 4.7 (or later)
9.4.1b Advises gcc and g++ runtime versions of 4.7 (or later)
9.4.1c Added to advise including libreadline.so.4, libreadline.so.5, and libreadline.so.6
9.4.2 Advises a gcc version of 4.7 (or later)
9.4.3 Clarifies the advisory to suggest using the latest available runtime components

9.6 Command System and Tools – advisory statement

9.6.3: Advises a Perl version of 5.18 (or later)
9.6.4: Advises a Python version of 3.3.4 (or later)
9.6.5: Advises an X11 version of X11/R7.7 (or later)

9.8 Message Fabric – advisory statement

9.8.1 Advises OFED release to version 3.5 (or later)

10.2 Relationships to Linux Standard Base

10.2 Updated to reflect Linux* Standard Base (LSB) version 4.1


    AR Jewellery Develops a New Shopping Paradigm for Tablets


    With augmented reality (AR) reaching smartphones, tablets, wearables (such as Google Glass*), and other platforms, the market is ripe for an AR development explosion across every conceivable application niche. Developer Serhiy Posokhin and his wife Antonina Posokhina, a designer by trade, recognized the potential of this technology in the world of jewelry and pursued it through the Intel® App Innovation Contest 2013 (AIC 2013). The resulting app, AR Jewellery, a program that lets users visualize what a ring would look like on their hands in real time, won the contest’s Retail category and now points the way for other developers to explore and expand in this massive market space.

    AR Jewellery: Formation and Function

    Designed specifically to leverage a tablet’s integrated cameras and touch capabilities, AR Jewellery takes live camera input and superimposes a 3D model—of a selected ring, in this case—on top of a marker. The marker is a small, black-and-white glyph in the middle of a narrow strip that the user prints and cuts out. The user places the strip around his or her finger so that the glyph is in a ring’s usual position. AR Jewellery recognizes the glyph, places the virtual ring in the glyph’s position, and keeps it there, moving and rotating as needed along with the user’s movements. In effect, the user is “trying on” the ring, or a realistic digital facsimile of it.


    Figure 1: AR Jewellery uses a glyph to help with 3D object tracking in real time.

    The touch-based application can use either front- or rear-facing tablet cameras and displays the live stream in a window surrounded by icons for various ring designs and program functions. Dual-camera capability was a key element in Posokhin’s design concept, because he noticed that when women try on jewelry in the mall, they usually hold the ring on their hand at arm’s length and then up near their face to see it in a mirror. The dual-camera approach suits this dual-view paradigm. 

    When users like a ring they have virtually tried on, they can capture a snapshot and post it directly to Facebook. “I imagined how convenient it would be if women were able to try on jewelry without leaving home,” said Posokhin. “They could share photos, drop hints about a desired gift, or reserve a product online.”

    Figure 2: Users can “try on” different rings contained in the program’s library of 3D-modeled designs.

    Challenges Addressed During Development

    Posokhin devoted considerable time to studying AR technology before setting to work on his application. He researched the various available libraries and frameworks and had to rethink his usual approach to pattern recognition and 3D graphics. As a result of this advance planning, Posokhin ran into few development problems, though he admitted to a couple of cosmetic challenges.

    Posokhin’s greatest challenge was purely aesthetic. The paper glyph that users place on their finger is clearly noticeable, and it distracts from enjoying a seamless AR experience. Initially, he tried to have AR Jewellery operate without a marker, but his attempts to have the software accurately recognize and track a finger failed. A glyph-based approach, on the other hand, has been extensively developed and documented for many years. (To learn more on the subject, Posokhin recommends reading Andrew Kirillov’s extensive AForge.Net article.)

    Posokhin plans to eliminate the need for a tracking glyph in AR Jewellery, and he also plans to broaden its catalog. He is quite optimistic about the Intel® RealSense™ 3D camera and feels that the camera and the capabilities it provides will give him the accuracy necessary to track fingers and dispense with the glyph.

    “At Mobile World Congress in 2014,” he noted, “I talked to Intel engineers who were demonstrating a [Intel] RealSense [3D] camera. I asked whether it is possible to recognize every finger separately to calculate the coordinates of a 3D model and put the ring in the correct place of the ring finger. They explained that this should be possible in the new Intel® RealSense™ SDK!”

    Another cosmetic issue centered on AR Jewellery’s user interface. For a cleaner, more engaging look, Posokhin’s wife, Antonina, wanted the live camera view to fill the tablet screen rather than sit inside of a wide, white border filled with icons. However, to make the contest deadline, Posokhin had to leave this feature for a future version.

    Figure 3: AR Jewellery leverages both tablet cameras to accommodate how women generally like
    to observe their prospective purchases.

    Resources Used

    As an AIC 2013 finalist, Posokhin was one of 300 developers to receive a Lenovo ThinkPad* Tablet 2 from Intel. This award allowed him to get a feel for how the app would handle in a realistic setting, because it’s the sort of tablet one might find among a higher-end clientele amenable to using technology to assist with their luxury shopping. The Tablet 2 offers several features in line with next-generation mobile apps, including noise-canceling array microphones, multi-touch 10-inch IPS display, the Windows* 8 OS, and a four-thread, 1.80-GHz Intel® Atom® processor Z2760 with integrated Intel® HD Graphics with an SGX545 core. The tablet’s two cameras were especially useful for AR Jewellery, with the front offering 2MP resolution and the rear providing 8MP. The higher-quality cameras help AR Jewellery to deliver superior results to image-conscious users. Additionally, the Tablet 2 provided a capable test environment for an AR mobile application. 

    Posokhin is a strong believer in helping the programming community. As such, he was happy to provide code snippets from AR Jewellery that focus on some of the software’s key features. Some of these snippets were rather lengthy, but two jumped out as being short, yet intriguing.

    The snippet in Figure 4 describes how AR Jewellery stores product information in an SQLite* database:

    Figure 4: AR Jewellery uses this code to enter new product models, integrating various metadata attributes.
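
    The original snippet itself is not reproduced in this article, so the following is only a rough, hypothetical sketch of the idea it illustrates: keeping each ring model’s asset path and metadata in an SQLite table and inserting new product records with bound parameters. The table layout, column names, and helper function are illustrative assumptions (AR Jewellery’s actual code targets Windows* and is not shown here); the sketch uses the plain C sqlite3 API.

    #include <sqlite3.h>
    #include <cstdio>

    // Hypothetical product table: each row points to a 3D ring model plus its metadata.
    static const char *kCreateTable =
        "CREATE TABLE IF NOT EXISTS products ("
        "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  name TEXT NOT NULL,"         /* display name shown in the ring carousel */
        "  model_path TEXT NOT NULL,"   /* path to the 3D model asset */
        "  metal TEXT, gemstone TEXT,"  /* example metadata attributes */
        "  price REAL);";

    // Insert one product record using a prepared statement with bound parameters.
    static bool addProduct(sqlite3 *db, const char *name, const char *modelPath,
                           const char *metal, const char *gemstone, double price) {
        sqlite3_stmt *stmt = nullptr;
        const char *sql = "INSERT INTO products (name, model_path, metal, gemstone, price) "
                          "VALUES (?, ?, ?, ?, ?);";
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) return false;
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 2, modelPath, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 3, metal, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 4, gemstone, -1, SQLITE_TRANSIENT);
        sqlite3_bind_double(stmt, 5, price);
        const bool ok = (sqlite3_step(stmt) == SQLITE_DONE);
        sqlite3_finalize(stmt);
        return ok;
    }

    int main() {
        sqlite3 *db = nullptr;
        if (sqlite3_open("catalog.db", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db, kCreateTable, nullptr, nullptr, nullptr);
        addProduct(db, "Classic Solitaire", "models/solitaire.obj", "white gold", "diamond", 1200.0);
        sqlite3_close(db);
        return 0;
    }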

    The ability to capture and share photos from the UI to Facebook utilizes the code in Figure 5:

    Figure 5: Posokhin knows that AR Jewellery users want to share what they see with their friends 
    and family. This Facebook functionality makes the task easy.

    In creating AR Jewellery, Posokhin experimented with several tools, including:

    In the end, though, the only tool he relied on for positional AR tracking was the Glyph Recognition and Tracking Framework. “This library allowed me to implement almost everything I needed,” noted Posokhin.

    Posokhin also recommends reading the article “Windows 8* Store vs Desktop App Development” and watching the “HowTo [sic] Create 3D Blender Model for use in WPF” video series, both of which he found useful at different points in the AR Jewellery process.

    Lessons Learned and Forward Thoughts

    Posokhin suggests that, whenever possible, touch developers look for opportunities to replace screen taps with gestures—an intuitive approach. When representing a world of 3D objects, seek to let users manipulate those objects in three dimensions, not the flat plane of a touch screen. Of course, making gestures pervasive in AR apps requires many industry pieces to be in place, not the least of which is a widely adopted interface standard. 

    “Remember when browsers did not yet have a single standard?” asked Posokhin. “HTML pages in each browser would open in different ways. We all had lots of discomfort associated with this. Today, we have a similar situation with AR browsers. But as far as I am aware, with ARML 2.0 [Augmented Reality Markup Language], the leading developers of AR software already agree that a common set of gesture commands should be created. Once we have an agreed-upon set of UI controls, we can create new applications easier and faster.”

    The ARML 2.0 Standards Workgroup is governed by the Open Geospatial Consortium (OGC). The workgroup published its 2.0 specification in November 2012. Efforts to make the spec a world standard were pushed forward when AR heavyweights Layar, Metaio, and Wikitude demonstrated ARML 2.0 technology at the Mobile World Congress in Barcelona. Progress in ARML 2.0 adoption continues.

    Resources

    Intel developer tools and programs increasingly seek to nudge developers into such next-generation approaches to building applications and assist them in succeeding in a world brimming with sensors and free from screen-size constraints.  Posokhin remains grateful for the opportunity that Intel App Innovation Contest 2013 presented. The contest is one of many avenues Intel promotes to help developers build powerful, forward-thinking applications able to take full advantage of the latest Windows 8 and Intel technologies across multiple device platforms. With the development tools he has available, Posokhin will be able to quickly optimize AR Jewellery for 2-in-1 Ultrabook™ devices, all-in-ones, and other form factors as well as tablets.

    The seamless use of camera, touch, and keyboard/mouse input types was a major factor in AR Jewellery’s winning the Retail category of AIC 2013. The more apps that can embrace this sort of multi-modal paradigm, the more ready their developers will be for the coming era of perceptual computing.

    About Serhiy Posokhin

    Serhiy Posokhin is a technical director and co-owner at the IT services firm TEAM, Ltd. in the Ukraine, as well as the founder of ToniKa Design Studio with his wife, Antonina. Posokhin sees so much potential in the AR market that he created AR magix, a start-up company dedicated to actualizing that potential. The company’s first app, skewed toward girls who want to “try on” a “dress of beautiful summer flowers,” can be tried in the Windows Store here. “Our mission is to delight people, using the magic of augmented reality,” he noted. 

    Related Articles

    Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android*, Intel® RealSense™ Technology, and Windows* to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.

    Intel, the Intel logo, Intel Atom, Intel Core, Iris, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    *Other names and brands may be claimed as the property of others.
    Copyright © 2014. Intel Corporation. All rights reserved.


    Black-Scholes-Merton Formula on Intel® Xeon Phi™ Coprocessor


    Download available under the Intel Sample Source Code License Agreement.

    Introduction

    Financial derivative pricing is a cornerstone of quantitative finance. The most common financial derivatives are stock options, which are contracts between two parties to buy or sell an asset at a certain time for an agreed price. There are two types of options: calls and puts. A call option gives the holder the right to buy the underlying asset by a certain date for a certain price. A put option gives the holder the right to sell the underlying asset by a certain date for a certain price. The price specified in the contract is known as the exercise price or strike price. The date in the contract is known as the expiration date or maturity. American options can be exercised at any time up to the expiration date. European options can be exercised only on the expiration date.

    Typically, the value of an option, f, is determined by the following factors:

    • S – the current price of the underlying asset
    • K – the strike price of the option (the exercise price)
    • T – the time to the expiration
    • σ – the volatility of the underlying asset
    • r – the continuously compounded risk-free rate

    In their 1973 paper, “The Pricing of Options and Corporate Liabilities,” Fischer Black and Myron Scholes created a mathematical description of financial markets and stock options, building on frameworks developed by researchers from Louis Bachelier to Paul Samuelson and Jack Treynor. They arrived at a partial differential equation, which Robert Merton first referred to as the Black-Scholes model.

    ∂f/∂t + r·S·(∂f/∂S) + (1/2)·σ²·S²·(∂²f/∂S²) = r·f

    This PDE has many solutions, corresponding to all the different derivatives with the same underlying asset S. The specific derivative obtained depends on the boundary conditions used while solving this equation. In the case of European call options, the key boundary condition is 

    fcall = max(S-K, 0) when t=T

    In the case of European put options, it is

    fput = max(K-S, 0) when t=T

    Black-Scholes-Merton Formula

    Shortly after Black and Scholes’s historic paper, Robert Merton became the first to publish a paper recognizing its significance, and he coined the term “Black-Scholes option pricing model.” Merton is also credited with the closed-form solution of the Black-Scholes equation for the European call option c and the European put option p, known as the Black-Scholes-Merton formula.

    c = S·N(d1) − K·e^(−rT)·N(d2)
    p = K·e^(−rT)·N(−d2) − S·N(−d1)

    where

    d1 = [ln(S/K) + (r + σ²/2)·T] / (σ·√T)
    d2 = d1 − σ·√T

    The function N(x) is the cumulative normal distribution function. It gives the probability that a variable with the standard normal distribution Φ(0,1) will be less than x. In most implementations, N(x) is approximated with a polynomial; one widely used approximation (from Abramowitz and Stegun) is:

    N(x) ≈ 1 − N′(x)·(a1·k + a2·k² + a3·k³ + a4·k⁴ + a5·k⁵),   k = 1/(1 + 0.2316419·x),   for x ≥ 0
    N(x) = 1 − N(−x),   for x < 0

    where N′(x) is the standard normal density function and
    a1 = 0.319381530, a2 = −0.356563782, a3 = 1.781477937, a4 = −1.821255978, a5 = 1.330274429

    Code Access

    The source code for the Black-Scholes-Merton formula is maintained by Shuo Li and is available under the BSD 3-Clause Licensing Agreement. The program runs natively on Intel® Xeon Phi™ coprocessors in a single-node environment.

    To get access to the code and test workloads, go to the source location and download the BlackScholes.tar file.

    Build Directions

    Here are the steps for rebuilding the program:

    1. Install Intel® Composer XE 2013 SP 2 on your system.
    2. Source the environment variable script file under
    3. Untar the BlackScholes.tar
    4. Type make to build the binary
    5. make
      icpc -DFPFLOAT -O3 -ipo -mmic -fno-alias -opt-threads-per-core=4 -openmp -restrict -vec-report2 -fimf-precision=low -fimf-domain-exclusion=31 -no-prec-div -no-prec-sqrt -DCOMPILER_VERSION=\""icpc-20140120"\" -ltbbmalloc -o BlackScholesSP.knc BlackScholes.cpp
      icpc  -O3 -ipo -mmic -fno-alias -opt-threads-per-core=4 -openmp -restrict -vec-report2 -fimf-precision=low -fimf-domain-exclusion=31 -no-prec-div -no-prec-sqrt -DCOMPILER_VERSION=\""icpc-20140120"\"  -ltbbmalloc -o BlackScholesDP.knc BlackScholes.cpp
    6. Executable Files
      1. For Single Precision processing: BlackScholesSP.knc
      2. For Double Precision processing: BlackScholesDP.knc

    Run Directions

    Copy the following files to the Intel® Xeon Phi™ coprocessor

    [prompt]$ scp BlackScholesSP.knc yourhost-mic0:
    [prompt]$ scp BlackScholesDP.knc yourhost-mic0:
    [prompt]$ scp /opt/intel/composerxe/lib/mic/libiomp5.so yourhost-mic0:
    [prompt]$ scp /opt/intel/composerxe/tbb/lib/mic/libtbbmalloc.so yourhost-mic0:
    [prompt]$ scp /opt/intel/composerxe/tbb/lib/mic/libtbbmalloc.so.2 yourhost-mic0:

    Enable turbo on the Intel Xeon Phi coprocessor

    [prompt]$ sudo /opt/intel/mic/bin/micsmc --turbo status
    mic0 (Turbo Mode Status):
       Turbo Mode is DISABLED
    mic1 (Turbo Mode Status):
       Turbo Mode is DISABLED
    [prompt]$ sudo /opt/intel/mic/bin/micsmc --turbo enable
    Information: mic0: Turbo Mode Enable succeeded.
    Information: mic1: Turbo Mode Enable succeeded.

    Make sure your Intel Xeon Phi coprocessor is C0-7120P/7120X/7120

    Black-Scholes-Merton Formula 05

    Set the environment variables and invoke the executable files from the host OS environment.

    Black-Scholes-Merton Formula 06

    The program is built on the host and executes on the Intel Xeon Phi coprocessor. It processes close to 16M sets of option data, an average of 64K data sets for each of 244 threads. The program loops 1,000 times, reading the option input data and pricing the options. Each time it processes an input set of option data, it calculates both the European call and the European put value. Calls and puts are counted separately when calculating options/sec. Besides options/sec, the program also outputs the total cycles spent on pricing, the cycles spent per option pair, and the total elapsed time. Data validation is part of the program: during the validation phase, the input data goes through the unoptimized scalar code to create reference results for comparison with the optimized, vectorized, and parallelized results.

    This benchmark runs on a single node on the coprocessor. It can also be modified to run in a cluster environment.
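
    To make the structure concrete, here is a minimal, self-contained sketch (not the downloadable benchmark itself) of the loop organization described above: an outer repetition loop, an OpenMP-parallel loop over the option records, and a throughput figure that counts the call and the put of each pair separately. The option count, input values, and helper name are illustrative assumptions.

    #include <cmath>
    #include <cstdio>
    #include <vector>
    #include <omp.h>

    // Illustrative pricing of one option pair: call from the closed form, put from put-call parity.
    static void price_pair(float S, float K, float T, float r, float sigma,
                           float &call, float &put) {
        const float invSqrt2 = 0.7071067811865475f;
        const float sqrtT = std::sqrt(T);
        const float d1 = (std::log(S / K) + (r + 0.5f * sigma * sigma) * T) / (sigma * sqrtT);
        const float d2 = d1 - sigma * sqrtT;
        const float Nd1 = 0.5f + 0.5f * std::erf(d1 * invSqrt2);  // N(x) via erf; see the notes below
        const float Nd2 = 0.5f + 0.5f * std::erf(d2 * invSqrt2);
        const float discount = std::exp(-r * T);
        call = S * Nd1 - K * discount * Nd2;
        put  = call + K * discount - S;                           // put-call parity
    }

    int main() {
        const int nOptions = 1 << 20;   // illustrative size; the real benchmark uses close to 16M sets
        const int nRepeats = 1000;      // the benchmark repeats the pricing loop 1,000 times
        std::vector<float> S(nOptions, 100.0f), K(nOptions, 110.0f), T(nOptions, 1.0f);
        std::vector<float> call(nOptions), put(nOptions);
        const float r = 0.02f, sigma = 0.30f;

        const double t0 = omp_get_wtime();
        for (int rep = 0; rep < nRepeats; ++rep) {
            #pragma omp parallel for
            for (int i = 0; i < nOptions; ++i)
                price_pair(S[i], K[i], T[i], r, sigma, call[i], put[i]);
        }
        const double t1 = omp_get_wtime();

        // A call and a put are counted as two priced options when reporting options/sec.
        const double optionsPerSec = 2.0 * nOptions * (double)nRepeats / (t1 - t0);
        std::printf("throughput: %.3e options/sec\n", optionsPerSec);
        return 0;
    }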

    Implementation Notes

    Our first attempt at optimizing the calculation of the Black-Scholes-Merton formula included using mathematical equivalences, taking advantage of the capabilities available in the development tools, and using target-specific capabilities.

    Put-Call Parity

    Using the equations for c and p given above, notice that:

    p + S = c + K·e^(−rT),   so   p = c + K·e^(−rT) − S


    This simply means that once you get the call option price c, you can get the put option price p with a simple addition and subtraction of intermediate results.

    N(x) and erf(x)
    N(x) is the cumulative normal distribution function. Mathematically, it is usually represented by the capital Greek letter Φ, so N(x) = Φ(x):

    N(x) = Φ(x) = (1/2)·[1 + erf(x/√2)]


    With this relationship, a fast implementation of erf(x) can be used to compute N(x). The extra addition and multiplication do not hurt performance, particularly when erf(x) can take advantage of SIMD execution and a direct N(x) implementation cannot. As part of its vectorized runtime library, the Intel Compiler provides a vectorized erf(x) function that can be called on scalar or SIMD data.
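
    As a small sketch of that point (not code from the downloadable source), the helper below computes N(x) through erf, and the loop is marked with #pragma omp simd so the compiler is free to substitute a vectorized erf from its math runtime where one is available:

    #include <cmath>
    #include <cstdio>

    // N(x) = 0.5 * (1 + erf(x / sqrt(2))): just one extra multiply and add around erf.
    static inline float normal_cdf(float x) {
        const float invSqrt2 = 0.7071067811865475f;
        return 0.5f * (1.0f + std::erf(x * invSqrt2));
    }

    int main() {
        const int n = 1024;
        float x[n], y[n];
        for (int i = 0; i < n; ++i)
            x[i] = -4.0f + 8.0f * i / (n - 1);   // sample points in [-4, 4]

        // The simd hint allows the whole loop body, including erf, to be vectorized.
        #pragma omp simd
        for (int i = 0; i < n; ++i)
            y[i] = normal_cdf(x[i]);

        std::printf("N(%f) = %f\n", x[n / 2], y[n / 2]);
        return 0;
    }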

    Natural base vs. base 2

    Natural-base logarithms and exponentials have been used extensively in financial calculations because of the continuous compounding of the time value of money. In computer arithmetic, however, base-2 logarithm and exponential calculations can take advantage of table-lookup implementations and usually have a performance advantage over the natural-base versions. In the extreme case of the Intel Xeon Phi coprocessor, log2(x) and exp2(x) are implemented as machine instructions with one and two cycles of throughput, while ln(x) and exp(x) are implemented as C runtime function calls.

    Using the change of base formula, you can easily adjust the argument or the result so that the base-2 versions of the logarithm and exponential are called instead of the natural-base versions.


    e^x = 2^(x·log2(e))        ln(x) = ln(2)·log2(x)

    Both ln2 and log2e are constants defined in the C runtime library and declared in math.h as M_LN2 and M_LOG2E, which makes it easy to replace the expensive exp(x) with exp2(M_LOG2E*x) and log(x) with M_LN2*log2(x).
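
    As a small illustration of that substitution (again, not taken from the downloadable source), the snippet below computes the discount factor and the log term of the formula once with the natural-base calls and once with the base-2 calls; the results agree up to rounding, and on targets where exp2/log2 are cheaper the second form is the one to prefer.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double S = 100.0, K = 110.0, r = 0.02, T = 1.0;

        // Natural-base forms, as they appear in the textbook formula.
        const double discount_nat = std::exp(-r * T);
        const double logterm_nat  = std::log(S / K);

        // Equivalent base-2 forms: exp(x) == exp2(x * log2(e)) and log(x) == ln(2) * log2(x).
        const double discount_b2 = std::exp2(-r * T * M_LOG2E);
        const double logterm_b2  = M_LN2 * std::log2(S / K);

        std::printf("discount factor: %.17g vs %.17g\n", discount_nat, discount_b2);
        std::printf("log term:        %.17g vs %.17g\n", logterm_nat, logterm_b2);
        return 0;
    }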


    About the Author

    Shuo Li works for the Intel Software and Service Group. His main interests are parallel programming and application software performance. In his current role as a staff software performance engineer covering the financial service industry, Shuo works closely with software developers and modelers and helps them achieve high performance with their software solutions. Shuo holds a Master's degree in Computer Science from the University of Oregon and an MBA degree from Duke University.

    References and Resources

    [1]Intel® Xeon® processor:  http://www.intel.com/content/www/us/en/processors/xeon/xeon-processor-e7-family.html

    [2]Intel® Xeon Phi™ coprocessor:  https://software.intel.com/en-us/articles/quick-start-guide-for-the-intel-xeon-phi-coprocessor-developer

    Attachment: BlackScholes_July2014.tar (30 KB)

    Adding Third-Party Plugins to Your Intel® XDK Cordova App


    Apache* Cordova* plugins are important tools for improving the features and functionality of the HTML5 mobile application you are developing with the Intel® XDK. They give you a way to extend your application's JavaScript API, resulting in better integration between your application and the device's software and hardware. There are hundreds of Cordova (and Adobe* PhoneGap*) plugins that you can use in your application. They can be found in the Apache Cordova Plugins Registry and similar registries, as well as in many open source GitHub repositories. As an example, if you work for a large company, your IT department may even maintain a set of Cordova plugins that they developed to support your company's mobile users.

    The Intel XDK references and uses Cordova plugins in a variety of places throughout the development cycle of your HTML5 mobile application. The most visible use is in the "Projects" and "Build" tabs. On the "Projects" tab (described below), you select which Cordova plugins are included as part of your app. The "Build" tab then automatically adds those plugins to your app package when it packages your app, so the extended API provided by those plugins can be used by your app. Besides "Projects" and "Build", plugins are also used in the Brackets editor (the text editor on the "Develop" tab) and in the emulator (the "Emulate" tab).

    On the "Develop" tab, plugins are used to implement code hinting (a common code-editor feature, often called "intelli-sense" or "auto-hinting"). The editor automatically provides suggestions for the API methods and properties of the core Cordova and Intel XDK APIs. At this time, code hinting is provided for all core plugins, regardless of which plugins you selected as part of your app on the "Projects" tab.

    The "Emulate" tab takes into account which core Cordova plugins you selected on the "Projects" tab and, during emulation, presents to your application only the APIs that correspond to the selected plugins. The full set of APIs provided by the Intel XDK plugins is always available to the application during emulation, regardless of the plugins selected on the "Projects" tab.

    NOTE: The "Test", "Debug", "Profile", and "Services" tabs are not affected by the plugins you selected on the "Projects" tab. Likewise, App Preview and App Preview Crosswalk are not affected by your project's plugin settings. They support only the core Cordova and Intel XDK plugins.

    Only the "Build" tab makes use of any third-party plugins specified for inclusion in your app via the "Projects" tab. Future editions of the Intel XDK are expected to expand the use of third-party plugins throughout the development cycle. For now, the only way to test and debug an app that includes third-party plugins is to build the app and run it on a real device.

    What Is a Cordova Plugin?

    Paraphrasing the Cordova Plugin Development Guide:

    A plugin is a package of code that lets your Cordova HTML5 application communicate with the native platform on which it runs. Plugins provide access to platform functionality that is ordinarily unavailable to browser-based applications. The Cordova (and Intel XDK) core APIs are implemented as plugins. Many other plugins are available, providing features such as a barcode scanner, NFC communication, and access to the native databases of phones and tablets (such as the contact list or the calendar).

    Plugins consist of a JavaScript API plus native code modules (one for each platform the plugin supports). These modules back the plugin's JavaScript API. Essentially, when your app calls one of the plugin's JavaScript APIs, the call is routed to the supporting native code module, which in turn accesses the native API on the device. For example, the JavaScript API is routed to Java code on an Android device or Objective-C code on an iOS device. Plugins can be complex or simple: providing an API as complex as a persistent database engine or as simple as a method to turn on the device's camera flash LED.

    Do I Need to Learn to Write Native Code in Java and Objective-C and C# and ???

    Absolutely not. Many plugins can be pulled directly from a plugin registry or a GitHub repository and used as is, with no need to learn how the plugin works internally. You do not have to "compile" anything to use a well-structured plugin; many are ready to use without any additional configuration or programming.

    You will need to learn how to use the plugin's JavaScript API in order to use it in your application. You can think of a Cordova plugin as a JavaScript library that extends the native functionality your app can reach, functionality that is typically not accessible from a browser or a webview (the embedded browser that renders your hybrid HTML5 application). Plugins provide the extra functionality that distinguishes a mobile application from a traditional web app.

    Some important points to keep in mind about Cordova plugins and the Intel XDK:
    • Because many plugins are third-party libraries, the Intel XDK may have no explicit knowledge of a plugin's functionality or code. The debug tools included in the Intel XDK support only the core Cordova plugins and the Intel XDK plugins.
    • Not all plugins are created equal, and many plugins are available only for the Android and iOS platforms. The "core" Cordova plugins and the Intel XDK API plugins support a broad list of Cordova platforms. Make sure the plugins you plan to use support the platforms on which you intend to distribute your app, or use platform- and feature-detection techniques to implement an alternative solution on unsupported platforms.
    • Not all plugins support every platform with identical API behavior. In other words, some aspects of a plugin's API may vary by platform (usually because of platform details, not because the plugin is incomplete or deficient). Variations include properties that have no meaning on some platforms, or methods that do not exist on others. See the plugin's documentation for these details (some plugins include a "quirks" section in their documentation), and use platform- and feature-detection functions to handle those quirks.
    • The Intel XDK does not include a mechanism for judging the quality of a plugin. There are many resources on the web, including the cordova-plugins and phonegap-plugins tags on StackOverflow, that can be used to determine which plugins are most reliable and how to work around bugs associated with specific plugins. In addition, for many plugins hosted on GitHub you can get support directly from the plugin's author if you run into problems.
    • Some third-party plugins were written for older versions of Cordova and may not work with Cordova 3.x. If you cannot find a version of a third-party plugin that works with Cordova 3.x, it may be possible to convert a pre-Cordova 3.x plugin so it works with Cordova 3.x. The Intel XDK requires plugins written for Cordova 3.x.
    • The "core" Cordova APIs and the Intel XDK APIs are all written as Cordova 3.x plugins. The "core" Cordova 3.x plugins are maintained by the Cordova CLI development community. The Intel XDK plugins are maintained by the Intel XDK development team.
    • Third-party plugins cannot be used with "legacy" Intel XDK builds (see the "Build" tab); they can be used only with Cordova and Crosswalk for Android builds. "Legacy" builds do, however, include a collection of "core" Cordova plugins when you build with the "Gold" option; these "legacy" plugins are based on Cordova version 2.9.0 and can be enabled in your application by including <script src="cordova.js"></script> after the include of the "intelxdk.js" script.
    • The AppMobi services (such as PushMobi) included in the "legacy" build system are not available as Cordova plugins (as of this writing). If you cannot identify an equivalent alternative and need an AppMobi service, your only options are to keep using the "legacy" build system or to ask AppMobi to provide a Cordova 3.x-compatible plugin for the service your app needs.
    • If you are developing your own Cordova plugin, you may have to install and use the Cordova CLI system on your development machine. You can share your plugin with other developers without them having to install the Cordova CLI on their systems; only you (the plugin developer) need to install it, and only for plugin development, because the Intel XDK does not require the Cordova CLI on your development machine to include the plugin in your app.
    • The Intel XDK does not provide a mechanism for debugging the native code of a Cordova plugin; that must be done with the native development tools specific to each platform. The "Emulate" tab does not use a plugin's native Cordova code to simulate plugin APIs; it uses code written for the node-webkit environment in which it runs to simulate the native-code component. For the plugins that the "Emulate" tab does support, only each plugin's JavaScript component is used inside the "Emulate" tab.
    • The Intel App Preview applications that you download from the app stores for quick on-device debugging of your HTML5 mobile application do not support third-party Cordova plugins. To debug an app that needs a non-core Cordova plugin, you can use feature detection to skip over, or to simulate the output of, a plugin that is not present (which is the case when your application runs inside App Preview, in the "Emulate" tab, or via the "Debug" and "Test" tabs). You can then build your application (on the "Build" tab), so the third-party plugins are included in the package, and finally run the built application on a real device.
    • As of this writing, the App Preview applications for Android, iOS, and Windows 8 were still based on the "legacy" build container and therefore do not precisely represent the behavior of your application inside a standard Cordova container. You can continue to use the "legacy" App Preview to debug your Cordova app, but keep in mind that there will be some differences in functionality and behavior. The App Preview apps will be updated to use a proper Cordova container so they can accurately represent a Cordova application.
    • As of this writing, the Intel XDK Cordova build system is based on version 3.3 of the Cordova CLI.

    Including a Cordova Plugin in Your Intel XDK App

    Including any of the "core" Cordova plugins or Intel XDK plugins in your app is very easy. Your app's "Projects" tab contains a list of included plugins, which you can change to add or remove plugins simply by clicking the check box next to a plugin's name. See the screenshot below, taken from a sample application.

    Details about which APIs are included in each core Cordova plugin can be found in the API Reference section of the Apache Cordova documentation. Consult the Intel XDK API Reference Documentation for API and platform details about the Intel XDK API plugins.

    In the image above, there are four blue buttons at the bottom of the plugin selection panel: "Select All," "Select Minimum," "Select None," and "Reset Plugin Defaults":

    • Select All: enables ALL core Cordova plugins and ALL Intel XDK plugins for inclusion in your app. This is convenient, but not recommended for the production version of your app. It is the default plugin state when you create a new project or import an existing project using version 0876 or earlier of the Intel XDK. Including ALL plugins is roughly equivalent to the plugin state of an app built with the "legacy" build system. Selecting ALL plugins also means you are subjecting your app to a large number of permissions that the end user must accept when installing the app (on the Android and Windows 8 platforms; iOS asks the end user for permission when the application uses a specific API). Including all plugins also means your app will be larger than it needs to be.

    • Select Minimum: enables a small set of Cordova and Intel XDK plugins. This is the recommended minimum set, not the required minimum set of plugins. If you are using the Intel XDK device ready event, you need to include at least the Intel XDK "Base" plugin. If you are using the Cordova device ready event, you do not need to include any plugins. Obviously, if you are using any Cordova or Intel XDK APIs (that is, APIs beyond the standard HTML5 APIs), you will need to include the plugins corresponding to each API your app uses. See the reference documents cited above for information about which plugins provide which APIs.

    • Select None: clears all "core" Cordova and Intel XDK plugins from your project. It has no effect on third-party plugins imported via the "Third-Party Plugins" panel.

    • Reset Plugin Defaults: resets the version of each core Cordova plugin to match the versions shipped with the Intel XDK. This button has no effect if you have never changed the version numbers of any of the core Cordova plugins. More information about core plugin versions is provided below. Note that the Intel XDK plugins do not have selectable version numbers, so this button has no effect on those plugins.

    About the Core Plugins

    The core Cordova plugins and the Intel XDK plugins are bundled with the Intel XDK; because they are included as part of the Intel XDK, they have a "default version" associated with them. These are the versions used if the "Reset Plugin Defaults" button is pressed (as described above).

    You can change a plugin's version number by selecting the edit button (hover the mouse over the plugin, as shown in the image below):

    and then typing the desired version number for the plugin (as shown below):

    See the Apache Cordova documentation pages for details about the core Cordova plugins. The git repository where each plugin is maintained includes details about plugin versions and so on. You can quickly determine which versions are available for a given plugin by inspecting the Apache Cordova Plugins Registry. At this time, only the "Build" tab uses the plugin version number; the other components of the Intel XDK use a fixed set of Cordova plugins.

    Including Third-Party Plugins

    There are two ways to include third-party plugins in your application: from a public repository or from a local directory. You choose the method by selecting "Import Local Plugin" or "Get Plugin from the Web" in the "Third-Party Plugins" section of the "Plugins and Permissions" panel on the "Projects" tab.

    Two kinds of public repositories are supported: a git repository (such as GitHub) or the Apache Cordova Plugins Registry. When using the Cordova registry, you only need the plugin ID, which can be found in the plugin's registry entry (see the image below for an example of referencing the Cordova registry with just a plugin ID). You can also optionally provide a plugin version number (more on this below) as part of the plugin ID field in the "Get Plugin from the Web" dialog.

    If your third-party plugin is being retrieved from the Cordova registry, the plugin Name and ID are sufficient. In that case, check the "Plugin is located in the Apache Cordova Plugins Registry" box and click the "Import" button.

    Otherwise, if your third-party plugin is located in a git repository, you also need to provide the address of that repository. The git repository must be publicly accessible on the Internet, because the "git pull" used to retrieve the plugin is performed by the cloud-based build server, not by the Intel XDK; the plugin must therefore reside in a publicly accessible git repository.

    If you are familiar with the Cordova CLI plugin add command, you can use its syntax to add a specific plugin version, based either on a version number stored in the Cordova registry or on a git reference ID. More details can be found in the Advanced Plugin Options section of the Cordova CLI documentation. If you do not specify a plugin version or reference ID, your app is built with the most recent version available in the Cordova registry or, when the plugin is retrieved from a public git repository, with the default branch.

    Importing a third-party plugin that resides in a local directory requires the plugin to be located inside the directory containing your application's source code. This directory is usually named "www" and is located inside the folder containing your project (see the "Project Info" section on the "Projects" tab for the name and location of your project directory). A local plugin is included in the source bundle that is sent to the cloud build server; the entire contents of your "source directory" are sent in that bundle.

    References to your third-party plugins, whether imported from local folders or located in public repositories, are listed in the "Third-Party Plugins" section of the "Projects" tab (see the image below for an example). The Name field you specified above is arbitrary and is used strictly as an identifier here and in the build log messages. The Plugin ID must match the ID specified inside the plugin's plugin.xml file (see the plugin's registry entry or git repository). At this time, there is no way to edit or inspect the data you provided during the plugin import process; if you need to change the Name, Plugin ID, or other fields, you must remove the plugin reference (click the (x) icon) and import the plugin again with the new Name, ID, and other field values.

    The image below shows what you will typically find when you inspect a plugin in the Apache Cordova Plugins Registry. Note the plugin ID field, the supported platforms, the plugin version, and the supported Cordova CLI version (aka "Engine Number"). At the time this document was written, the Intel XDK build server was based on Cordova CLI version 3.3.

    Building Your Cordova App

    To build your application package, based on the Cordova container and the plugins you selected, go to the "Build" tab and, in the "Cordova 3.x Hybrid Mobile App Platforms" section, select the platform for which you want to generate an installable package.

    Both "Crosswalk for Android" and "Android" generate APKs for Android devices. See the Using the Intel XDK "Crosswalk for Android" Build Option guide and the Crosswalk Overview for more details.

    When you start a build, you are asked whether you want to upload your code to the build server ("Upload to the build server?"). Normally you should select "Upload Code" when this question appears. The usual exception is when you have already uploaded your code and built for one platform and are now building for a second platform, with no changes to your application between the two builds. In that case, there is no need to upload your code to the server again.

    A successful upload of your application's source bundle results in a screen similar to the one shown below. To start the build, click the "Build App Now" button. Unlike the "legacy" system, there are no options associated with this step; your options are stored in the intelxdk.config.platform.xml file. See Adding Build Options to Your Intel® XDK Cordova App Using intelxdk.config.additions.xml for information about adding build options that are not accessible through the "Projects" tab.

    iOS builds include an option to provide your Apple developer certificate. The certificate is stored on the build system along with your Intel XDK user ID; you only need to provide it once, for all applications you build with your login.

    When the build completes successfully, you will see a screen like the one below. If the build system encounters a problem, the build log will include an error message indicating the nature of the problem that is preventing the application from building. If you run into build errors, visit the Intel XDK forum for help (a link to the forum is available under the (?) icon inside the Intel XDK).

    A Simple Example Using Cordova Plugins

    The device screenshots below show an HTML5 hybrid app running on an Android device. The application was built using the plugins shown in the previous sections of this document. As an example, the core Cordova plugins "Device," "Media," "Accelerometer," and "Compass" were selected in the "Core Cordova Plugins" column of the "Included Plugins" section on the "Projects" tab, and the Intel XDK "Base" plugin was selected in the "Intel XDK Cordova Plugins" column of the same section. In addition, the third-party "Cordova StatusBar" plugin was included using the "Get Plugin from the Web" feature.

    The app dynamically generates a list of the available plugins at runtime by inspecting a special Cordova 3.3 JavaScript property. The results are printed to a <textarea> element at the bottom of the app's index.html page.

    Note that the version of the Cordova "Device" plugin was changed from the default 0.2.5 to 0.2.10. Also note that the reported Cordova version is 3.3.0, which matches the version of the Cordova CLI used by the XDK's Cordova build system. Unlike the "legacy" Intel XDK build system, where all of the Intel XDK APIs were available, in the Cordova build system only the APIs associated with the Intel XDK plugins you selected are present; in this case, only the methods and properties that belong to the Intel XDK "Base" plugin.

    The only difference between the two device screenshots shown above is the visibility of the status bar at the top of the screen. A tap on the "Toggle Status Bar" button calls the StatusBar plugin's StatusBar.hide() or StatusBar.show() method, depending on the visibility state of the status bar, which is determined through the plugin's StatusBar.isVisible property. If the plugin had not been included, those methods and that property would not be available to the app, and references to them would result in JavaScript undefined errors.

    Real Devices vs. Simulated Devices

    The screenshot below shows the same app running inside the Intel XDK "Emulate" tab. There are several notable differences, which help illustrate some key distinctions between running your plugin-enabled app inside the emulator (or in App Preview or the "Debug" tab) and running it on a real device.

    Remember that the screenshots shown earlier were taken from a real device running the app built with the Android Cordova option on the Intel XDK "Build" tab.

    What is different?

    • The Cordova versions do not match: the "Emulate" tab is using Cordova 3.4, while the built app on the device is using version 3.3 (in practice, a small difference).
    • The core Cordova "Device" plugin is at its default version (0.2.5), while the built app is using a newer version that was specified manually on the "Projects" tab (0.2.10).
    • The third-party "StatusBar" plugin does not appear in the list of included plugins when the app runs in the emulator. As a result, tapping the "Toggle Status Bar" button produces no change in the emulated device's status bar, because the StatusBar API is not present in that execution environment.

    Running this app inside App Preview (indirectly via the "Test" tab or directly from the App Preview menu) or in the "Debug" tab (which runs the app in a special preview build of Crosswalk for Android) shows yet another set of results. If you run the app in those environments, you will see a long list of "core" plugins, as if you had selected every available plugin in the "Included Plugins" list on the "Projects" tab. This is normal and is caused by the way these preview applications work. You also will not see any third-party plugins; the APIs of those third-party plugins are not accessible from within these app-testing environments.

    A version of the app shown above can be found in this GitHub repository: https://github.com/xmnboy/test-third-party-plugin.

    Multi-Stage Post-Processing RenderScript for Android* on Intel® Architecture


    Introduction

    What is RenderScript? It is a framework designed by Google for performing data-parallel computation on Android-based devices. RenderScript kernels and intrinsics may be accelerated by an integrated GPU. Thanks to the tight integration of RenderScript with the Android OS through Java* APIs, you can perform data-parallel computations from Java applications efficiently. The RenderScript runtime also takes care of efficiently scheduling and parallelizing work on the right processor available on a device.

    Implementing the “Old Movie” Effect

    The “old movie effect,” also known as “film look,” is a popular video effect that applies film-style lighting, noise, camera motion (jittering), black-and-white conversion, specific framing, simulated grading over time, and other effects associated with aged films. This tutorial: https://software.intel.com/en-us/articles/tutorial-camera-video-stream-processing-with-renderscript demonstrates a basic implementation of the “old movie” effect, including:

    • Stains
    • Scratches (vertical)
    • Hairs and irregular scratches
    • Camera jittering
    • Intensity and color variations
    • Semi-transparent vignette

    To simplify the code, the tutorial keeps only one random and vertical scratch, one stain, and so on at a time. Still, all the effects rely on random number generation to evolve over time in a visually convincing way.

    In the following image, you can see how RenderScript is used to implement the popular “Old Movie” video effect.


    Figure 1:  “Old Movie” video effect

    Multi-Stage Post-Processing with RenderScript

    The first step in the application logic is converting the camera input (preview) frames from YUV to RGB using the ScriptIntrinsicYuvToRGB intrinsic (built-in) class. The image is then blurred with another intrinsic, ScriptIntrinsicBlur. After that, a simple setup function is invoked once per frame (to advance the time step and so on), and finally the post-processing kernel itself is run.

    To make the process asynchronous (per general Android guidelines to avoid expensive operations on the main UI thread), the application uses a simple class that extends AsyncTask with what is needed:

    private class ProcessData extends AsyncTask<byte[], Void, Boolean>
    {
        protected Boolean doInBackground(byte[]... args)
        {
            …
            // Run the scripts
            // 1) convert from YUV to RGB
            intrinsicYuvToRGB.setInput(allocationYUV);
            intrinsicYuvToRGB.forEach(allocationIn);
            // 2) apply the blur
            intrinsicBlur.setInput(allocationIn);
            intrinsicBlur.forEach(allocationBlur);
            // 3) update the filter state
            script.invoke_update_state();
            // 4) set the filter args and run the OldMovie filter
            script.forEach_filter(allocationBlur, allocationOut);
            // 5) wait for completion
            rs.finish();
            // 6) propagate the results back to the bitmap
            allocationOut.syncAll(Allocation.USAGE_SHARED);
            …
        }
    }

    Notice that the very first input allocation (allocationYUV) gets the bytes directly from the camera preview frames. Copying is required to keep the process asynchronous, which enables the camera to proceed without blocking until the processing is done:

    // copy the bytes from the buffer (to let the camera proceed with the next frames)
    allocationYUV.copyFrom(arg0);
    
    
    
    In contrast, the final (output) allocation, allocationOut, is connected to the bitmap when it is created, to enable zero-copy updates. Refer to the onCreate method of the following sample Activity:
    
    // Create an allocation (the memory abstraction in RenderScript)
    // that corresponds to the outputBitmap
    allocationOut = Allocation.createFromBitmap(rs, outputBitmap);
    

    Finally, the bitmap is displayed using a regular ImageView. Refer to the following method of the ProcessData class:

    protected void onPostExecute(Boolean result) {
    …
    outputImageView.setImageBitmap(outputBitmap);
    outputImageView.invalidate();
    RenderScriptIsWorking = false;
    }
    


    Figure 2:  “Old Movie” video effect

    Related Resources

    A good tutorial about RenderScript, including code samples, usage guidelines, and more, can be found at: https://software.intel.com/en-us/articles/renderscript-basic-sample-for-android-os

    RenderScript tutorial:  https://software.intel.com/en-us/articles/tutorial-camera-video-stream-processing-with-renderscript

    About the Authors

    Stanislav Pavlov works in the Software & Service Group at Intel Corporation. He has more than 10 years of experience in the technology industry. His main interests are performance and power optimization and parallel programming. In his current role as a Senior Application Engineer providing technical support for Intel®-based devices, Stanislav works closely with software developers and SoC architects to help them achieve the best possible performance on Intel platforms. Stanislav holds a Master's degree in Mathematical Economics from the National Research University Higher School of Economics and is currently pursuing an MBA at the Moscow Business School.

    Maxim Shevtsov is a Software Architect in the OpenCL™ performance team at Intel. Maxim got his Master’s degree in Computer Science in 2003 and, prior to joining Intel in 2005, was involved in various academia studies in computer graphics.

    Notices

    INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

    UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

    Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

    The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

    Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

    Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm

    Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

    Any software source code reprinted in this document is furnished under a software license and may only be used or copied in accordance with the terms of that license.

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

    Copyright © 2014 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

    OpenCL and the OpenCL logo are trademarks of Apple Inc and are used by permission by Khronos.

  • Renderscript
  • movie effect
  • Android
  • Sviluppatori
  • Android*
  • Intermedio
  • Telefono
  • Tablet
  • URL
  • Area tema: 

    Android

    Adding Google AdMob* to Your Cordova* Application

    $
    0
    0

    If you want to serve Google AdMob* advertising as part of your HTML5 hybrid mobile app, you will need to use a Cordova plugin. Unlike the solution for desktop browsers, mobile applications need a native code component to retrieve and display ads on a mobile device. Not all mobile advertising services have this restriction, but if you want to use Google AdMob you will need a Cordova plugin.

    There are several Cordova plugins available for delivering ads; some serve ads from third-party sources and a few serve ads from the Google AdMob network. You are not required to use Google AdMob to serve ads, but only the Google AdMob plugins are described in this article.

    The official Google AdMob plugin is available at github.com/gooogleadmob. At the time this article was written, it was available only for the Android and iOS platforms. Another popular Google AdMob plugin is available at github.com/floatinghotpot/cordova-plugin-admob. The "FloatingHotPot" plugin supports three mobile platforms: Android, iOS, and Windows Phone. Additional advertising plugins can be found by searching the Cordova Plugin Registry or PlugReg (an independent registry of Cordova plugins), or simply by doing a web search for "mobile ad services".

    Details on how to use the AdMob system as a form of monetization are available on the AdMob support pages.

    Before you can serve any AdMob ads, you need an AdMob account, which can be created at www.admob.com. There is no cost associated with creating the account or with showing ads inside your app. If you already have an AdMob account, all you need to do to use the Google AdMob plugin is create the appropriate Ad Unit IDs that identify your ad impressions and provide them as part of the AdMob API initialization sequence inside your application. A screen capture of the AdMob tool you use to create the Ad Unit IDs is shown below.

    IMPORTANT: each application must have its own set of Ad Unit IDs! If you do not yet have an app in the app store, you can use the "manual" method of identifying your app to obtain your Ad Unit IDs.

    The Google AdMob plugin repository on GitHub includes several examples that can help you understand how to include ads in your app. The simplest example is an index.html file located in the plugin's PhoneGap repository, which is a single-file app. If you want to create a sample in the Intel XDK based on this PhoneGap example, follow these steps:

    1. Go to the "Projects" tab.
    2. Select "Start a New Project" at the bottom left of the screen.
    3. Select "Start with a Blank Project."
    4. Replace the default index.html file in your new project with the contents of the example referenced above.
    5. Plug your Ad Unit IDs (one for a "banner" ad and one for an "interstitial" ad) into the appropriate places in the sample code, and save the index.html file.

    Finally, go to the "Projects" tab and use "Get Plugin from the Web" in the "Third-Party Plugins" panel. See the screen capture at the end of this article, and refer to the article Adding Third-Party Plugins to Your Intel® XDK Cordova App for more details on using plugins in your Cordova apps.

    NOTE: because your test application includes a third-party plugin, it will only run on a real device. You must use the "Build" tab to create an APK (for Android) or an IPA (for iOS) in order to run your application. If you try to run this app using the "Emulate", "Test", or "Debug" tabs, AdMob will fail.

    Because this app uses a Cordova plugin, it can only be built using Cordova. If you try to use the "legacy" build, the app will not work.

    https://github.com/MobileChromeApps/google-play-services

    https://github.com/MobileChromeApps/mobile-chrome-apps/tree/master/chrome-cordova/plugins/chrome.identity

    http://developer.android.com/google/play-services/index.html


    What’s new in Intel® Cluster Checker version 2.2

    $
    0
    0

    Intel® Cluster Checker version 2.2 is an update released in conjunction with the Intel® Cluster Ready Architecture Specification version 1.3.1 and also adds new functions and capabilities.

    This version includes:

    • Added support for verifying compliance to Intel® Cluster Ready architecture version 1.3.1
    • Enhanced InfiniBand* configuration checking for systems using Intel® True Scale InfiniBand* Host Channel Adapters
    • Enhanced dgemm test module configuration
    • Merger of micperf and stream test modules
    • Merger of micmpi and mpi_internode modules
    • Auto detection of Intel® Xeon Phi™ coprocessors
    • Enhanced dgemm testing for native execution mode on Intel® Xeon Phi™ coprocessors
    • Auto creation of groups based on node architecture
    • Enhanced hpl test module to include Intel® Xeon Phi™ coprocessors
    • Enhanced imb_pingpong test module to support Intel® Xeon Phi™ coprocessors
    • Enhanced kernel test module to also check kernel parameter consistency.

    This version also resolves issues from previous releases of the tool:

    • The mount test module should now pass on SLES11SP2* for all non-English locales.
    • The hpl test module was updated to depend on the imb_pingpong test module.
    • The default set of InfiniBand* kernel modules was updated to reflect a more general list.
    • The imb_pingpong test module was enabled for configurations using Intel® Xeon Phi™ coprocessors without InfiniBand*.
    • Fixed an issue where the hardware test module did not identify the correct amount of system memory when running as an unprivileged user.
    • Fixed an issue where the kernel test module did not properly exclude values specified with the param_exclude tag in the configuration file.
    • Fixed an issue with the remote_login test module reporting failure of the login latency check.

    For more information on Intel® Cluster Ready go to http://www.intel.com/go/cluster.

     

  • Intel Cluster Ready
  • Intel Cluster Checker
  • HPC
  • clusters
  • Sviluppatori
  • Partner
  • Linux*
  • Avanzato
  • Principiante
  • Intermedio
  • Intel® Cluster Checker
  • Intel® Cluster Ready
  • Elaborazione basata su cluster
  • Server
  • URL
  • Area tema: 

    IDZone

    Intel® MKL PARDISO

    $
    0
    0

     

     

    The following is a compilation of Intel® MKL PARDISO-related articles from the Knowledge Base.

     

     

     

  • MKL PARDISO
  • PARDISO Landing Page
  • Avanzato
  • Principiante
  • Intermedio
  • Intel® Math Kernel Library
  • URL
  • Area tema: 

    IDZone

    Dual-Screen Social Cast Demo App Using Miracast* on Android*

    $
    0
    0

    Social Cast is an Android Miracast Dual Screen App Sample written by Intel’s IT Flex Software Engineering team. The code sample is available for download.

    By combining a Miracast connection to a TV, social media APIs, and Android’s background task feature, the Social Cast demo app brings your favorite social media content to your TV. The sample code demonstrates a simple technique for integrating these technologies.

    Dual Screen Social Cast
    Figure 1: Social Cast screen shots - left screen: setup screen; right screen: social media content rendered on the TV via Miracast*

    Miracast was released in Android 4.2, and by using the Settings menu users can easily discover a TV, connect to it, and mirror their device’s screen on it. This use case is interesting for sharing, but the exciting innovation in Android 4.4 (KitKat) is the Presentation API, which lets ISVs enable the TV as a second screen. Using open social media APIs from Facebook and Flickr, Social Cast demonstrates how to gather the latest content “feeds” and then reformat them for the TV. This turns your device into a content aggregator that provides a “lean back” experience.

    Intel has identified multitasking as a top user model in home living rooms, so we developed Social Cast around this experience. By using a background task to pull in social content, format it, and maintain the Miracast connection, we free up the device screen for other apps, so users can casually watch their latest social media content while they surf the web, text, or take a call on their devices. The source code included in this article shows how to implement this architecture.
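
    The second-screen half of this architecture can be sketched with the standard Android Presentation API (android.app.Presentation). The following is only a minimal illustration of the idea, not the Social Cast source; the class name and the layout resource are hypothetical, and showing a Presentation from a background service may require additional window-type setup:

    	import android.app.Presentation;
    	import android.content.Context;
    	import android.hardware.display.DisplayManager;
    	import android.os.Bundle;
    	import android.view.Display;

    	// Minimal sketch: render a TV-formatted view on the presentation (Miracast) display.
    	// SocialFeedPresentation and R.layout.tv_feed are hypothetical names used for illustration.
    	public class SocialFeedPresentation extends Presentation {
    	    public SocialFeedPresentation(Context context, Display display) {
    	        super(context, display);
    	    }

    	    @Override
    	    protected void onCreate(Bundle savedInstanceState) {
    	        super.onCreate(savedInstanceState);
    	        setContentView(R.layout.tv_feed); // layout holding the reformatted social feed
    	    }

    	    // Find the presentation display (e.g. the Miracast-connected TV) and show content on it.
    	    public static void showOnSecondScreen(Context context) {
    	        DisplayManager dm = (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
    	        Display[] displays = dm.getDisplays(DisplayManager.DISPLAY_CATEGORY_PRESENTATION);
    	        if (displays.length > 0) {
    	            new SocialFeedPresentation(context, displays[0]).show();
    	        }
    	    }
    	}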

    Downloading and Installing the Sample

    The code sample is available for download. To build and deploy the application to your Android device:

    1. Create a new Eclipse* workspace.
    2. In Eclipse, right click inside Package Explorer and choose Import->General-> Existing Projects into Workspace.
    3. Go to code location and choose both projects (FacebookSDK and WIDIFBApp). Check "Copy projects into workspace”.
    4. Build.
    5. If you see an error regarding an Android support library, right-click the project containing the error in Package Explorer and choose Build Path -> Configure Build Path -> Libraries. Remove the Android support library (it should have a red X on it), then click Add External Jars and select the Android Support Library JAR (it should be in the default path; if not, locate it on your computer).
    6. Build.

    Conclusion

    The combination of Miracast, social network APIs, and background tasks creates a rich media experience for end users in an innovative new user model. We encourage ISVs to write dual-screen apps across all areas of app development, from spectator views in gaming, to multi-camera-angle video mosaics, to simple search/curate/share experiences with local content.

    Visit Intel's Dual Screen enabling page to learn more.

    Android Resources

    Why WiDi Miracast is a Game Changer for Android:  https://software.intel.com/en-us/blogs/2013/10/14/why-widi-miracast-is-a-game-changer-for-android

    Intel Android phone as TV Game Controller via WiDi / Miracast:  http://www.youtube.com/watch?v=on7Y1ex_98Y

    How to Enable Intel® Wireless Display Differentiation for Miracast* on Intel® Architecture Phone:  https://software.intel.com/en-us/articles/how-to-enable-intel-wireless-display-differentiation-for-miracast-on-intel-architecture

    HTML5 Resources

    W3C Working Group to Create a Dual Screen API that Abstracts the Connection Technology for HTML 5.0. The current draft and proposal are here:  https://www.w3.org/community/webscreens/wiki/API_Discussion

    Intel® WiDi Resources:  Intel® WiDi Apps - Dual Screen Apps for Android* and Windows*: https://software.intel.com/en-us/intel-widi

    Dual Screen Intel® WiDi Application: https://software.intel.com/sites/default/files/article/437858/dual-screen-wpf-widi-application.pdf

    Dual Screen Intel® WiDi Application: https://software.intel.com/en-us/articles/dual-screen-intel-widi-application

    Intel® WiDi Social Cast Demo Video: https://downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=23022

    Intel® WiDi Compatible Receivers: www.intel.com/go/widi

    About the Author

    Intel® WiDi Evangelist Steve Barile has been in the WiDi group since 2010, working with the Miracast wireless display standard from its inception at Intel. Since 2013, Steve has focused his time and expertise on enabling the growing number of software vendors to develop dual-screen-aware apps.

     

    Intel, the Intel logo, Ultrabook, Core, VTune, and vPro are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2014 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • Social Cast
  • Dual Screen
  • Intel® WiDi
  • Miracast
  • Sviluppatori
  • Android*
  • Android*
  • Intermedio
  • Laptop
  • Tablet
  • URL
  • Area tema: 

    IDZone

    Debugging Intel® Xeon Phi™ Applications on Linux* Host

    $
    0
    0

    Contents

    Introduction

    The Intel® Xeon Phi™ coprocessor is a product based on the Intel® Many Integrated Core Architecture (Intel® MIC). Intel offers a debug solution for this architecture that can debug applications running on an Intel® Xeon Phi™ coprocessor.

    There are many reasons why a debug solution for Intel® MIC is needed. Some of the most important are the following:

    • Developing native Intel® MIC applications is as easy as for IA-32 or Intel® 64 hosts. In most cases they just need to be cross-compiled (-mmic); a minimal example follows this list.
      Yet, the Intel® MIC Architecture differs from the host architecture. Those differences can expose existing issues, and incorrect tuning for Intel® MIC can introduce new ones (e.g. data alignment, whether an application can handle hundreds of threads, efficient memory consumption, etc.).
    • Developing offload-enabled applications adds complexity, because host and coprocessor share the workload.
    • General lower-level analysis, tracing execution paths, learning the instruction set of the Intel® MIC Architecture, …
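
    For reference, such a native cross-compile is done on the host; a minimal sketch (file names are illustrative, and the Intel® Composer XE environment is assumed to be installed) looks like this:

    	$ source compilervars.sh intel64          # make the Intel compiler available (path depends on the installation)
    	$ icc -mmic -g -O0 -o myapp.mic myapp.c   # cross-compile for the coprocessor with debug information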

    Debug Solution for Intel® MIC

    For Linux* host, Intel offers a debug solution for Intel® MIC which is based on GNU* GDB. It can be used on the command line for both host and coprocessor. There is also an Eclipse* IDE integration that eases debugging of applications with hundreds of threads thanks to its user interface. It also supports debugging offload enabled applications.

    How to get it?

    There are currently two ways to obtain Intel’s debug solution for Intel® MIC Architecture on Linux* host:

    • As part of Intel® MPSS (Intel® Manycore Platform Software Stack)
    • As part of Intel® Composer XE

    Both packages contain the same debug solution for Intel® MIC Architecture!

    Why use GNU* GDB provided by Intel?

    • Capabilities are released back to GNU* community
    • Latest GNU* GDB versions in future releases
    • Improved C/C++ & Fortran support thanks to Project Archer and contribution through Intel
    • Increased support for Intel® architecture (esp. Intel® MIC)
    • Eclipse* IDE integration for C/C++ and Fortran
    • Additional debugging capabilities – more later

    Why is Intel providing a Command Line and Eclipse* IDE Integration?

    The command line with GNU* GDB has the following advantages:

    • Well known syntax
    • Lightweight: no dependencies
    • Easy setup: no project needs to be created
    • Fast for debugging hundreds of threads
    • Can be automated/scripted

    Using the Eclipse* IDE provides more features:

    • Comfortable user interface
    • The best-known IDE in the Linux* space
    • Use existing Eclipse* projects
    • Simple integration of the Intel enhanced GNU* GDB
    • Works also with Photran* plug-in to support Fortran
    • Supports debugging of offload enabled applications
      (not supported by command line)

    Deprecation Notice

    Intel® Debugger is deprecated (incl. Intel® MIC Architecture support):

    • Intel® Debugger for Intel® MIC Architecture was only available in Composer XE 2013 & 2013 SP1
    • Intel® Debugger is not part of Intel® Composer XE 2015 anymore

    Users are advised to use GNU* GDB that comes with Intel® Composer XE 2013 SP1 and later!

    You can provide feedback via either your Intel® Premier account (http://premier.intel.com) or via the Debug Solutions User Forum (http://software.intel.com/en-us/forums/debug-solutions/).

    Features

    Intel’s GNU* GDB, starting with version 7.5, provides additional extensions that are available on the command line:

    • Support for Intel® Many Integrated Core Architecture (Intel® MIC Architecture):
      Displays registers (zmmX & kX) and disassembles the instruction set
    • Support for Intel® Transactional Synchronization Extensions (Intel® TSX):
      Helpers for Restricted Transactional Memory (RTM) model
      (only for host)
    • Data Race Detection (pdbx):
      Detect and locate data races for applications threaded using POSIX* thread (pthread) or OpenMP* models
    • Branch Trace Store (btrace):
      Record branches taken in the execution flow to backtrack easily after events like crashes, signals, exceptions, etc.
      (only for host)
    • Pointer Checker:
      Assist in finding pointer issues if compiled with Intel® C++ Compiler and having Pointer Checker feature enabled
      (only for host)
    • Register support for Intel® Memory Protection Extensions (Intel® MPX) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512):
      Debugger is already prepared for future generations

    The features for Intel® MIC highlighted above are described in the following.

    Register and Instruction Set Support

    Compared to Intel® architecture on host systems, Intel® MIC Architecture comes with a different instruction and register set. Intel’s GNU* GDB comes with transparently integrated support for those.  Use is no different than with host systems, e.g.:

    • Disassembling of instructions:
      
      		(gdb) disassemble $pc, +10
      
      		Dump of assembler code from 0x11 to 0x24:
      
      		0x0000000000000011 <foobar+17>: vpackstorelps %zmm0,-0x10(%rbp){%k1}
      
      		0x0000000000000018 <foobar+24>: vbroadcastss -0x10(%rbp),%zmm0
      
      		⁞
      
      		


      In the above example the first ten instructions are disassembled, beginning at the instruction pointer ($pc). Only the first two lines are shown for brevity. The first two instructions are Intel® MIC specific, and their mnemonics are shown correctly.
       
    • Listing of mask (kX) and vector (zmmX) registers:
      
      		(gdb) info registers zmm
      
      		k0   0x0  0
      
      		     ⁞
      
      		zmm31 {v16_float = {0x0 <repeats 16 times>},
      
      		      v8_double = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0},
      
      		      v64_int8 = {0x0 <repeats 64 times>},
      
      		      v32_int16 = {0x0 <repeats 32 times>},
      
      		      v16_int32 = {0x0 <repeats 16 times>},
      
      		      v8_int64 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0},
      
      		      v4_uint128 = {0x0, 0x0, 0x0, 0x0}}
      
      		


      The registers have also been extended by the kX (mask) and zmmX (vector) register sets that come with Intel® MIC.

    If you use the Eclipse* IDE integration you’ll get the same information in dedicated windows:

    • Disassembling of instructions:
      Eclipse* IDE Disassembly Window
    • Listing of mask (kX) and vector (zmmX) registers:
      Eclipse* IDE Register Window

    Data Race Detection

    A quick excursion about what data races are:

    • A data race happens…
      If at least two threads/tasks access the same memory location without synchronization and at least one thread/task is writing.
    • Example:
      Imagine the two functions thread1() & thread2() are executed concurrently by different threads.

      
      		int a = 1;
      
      		int b = 2;
      
      		                                         | t
      
      		int thread1() {      int thread2() {     | i
      
      		  return a + b;        b = 42;           | m
      
      		}                    }                   | e
      
      		                                         v
      
      		


      Return value of thread1() depends on timing: 3 vs. 43!
      This is one (trivial) example of a data race.

    What are typical symptoms of data races?

    • Data race symptoms:
      • Corrupted results
      • Run-to-run variations
      • Corrupted data ending in a crash
      • Non-deterministic behavior
    • Solution is to synchronize concurrent accesses, e.g.:
      • Thread-level ordering (global synchronization)
      • Instruction level ordering/visibility (atomics)
        Note:
        Race free but still not necessarily run-to-run reproducible results!
      • No synchronization: data races might be acceptable

    Intel’s GNU* GDB data race detection can help to analyze correctness.

    How to detect data races?

    • Prepare to detect data races:
      • Only supported with Intel® C++/Fortran Compiler (part of Intel® Composer XE):
        Compile with -debug parallel (icc, icpc or ifort)
        Only objects compiled with -debug parallel are analyzed!
      • Optionally, add debug information via -g
    • Enable data race detection (PDBX) in debugger:
      
      		(gdb) pdbx enable
      
      		(gdb) c
      
      		data race detected
      
      		1: write shared, 4 bytes from foo.c:36
      
      		3: read shared, 4 bytes from foo.c:40
      
      		Breakpoint -11, 0x401515 in L_test_..._21 () at foo.c:36
      
      		*var = 42; /* bp.write */
      
      		

    Data race detection requires an additional library libpdbx.so.5:

    • Keeps track of the synchronizations
    • Part of Intel® C++ & Fortran Compiler
    • Copy to coprocessor if missing
      (found at <composer_xe_root>/compiler/lib/mic/libpdbx.so)
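
    For reference, a build prepared for data race detection as described above might look like this for the earlier thread1()/thread2() example (file names are illustrative):

    	$ icc -debug parallel -g -O0 -o race_example race_example.c            # host build
    	$ icc -debug parallel -g -O0 -mmic -o race_example.mic race_example.c  # native coprocessor build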

    Supported parallel programming models:

    • OpenMP*
    • POSIX* threads

    Data race detection can be enabled/disabled at any time:

    • Only memory accesses within the enabled period are analyzed
    • This keeps the memory footprint and run-time overhead minimal

    There is finer-grained control for minimizing overhead and selecting the code sections to analyze by using filter sets.

    More control over what to analyze with filters:

    • Add filter to selected filter set, e.g.:
      
      		(gdb) pdbx filter line foo.c:36
      
      		(gdb) pdbx filter code 0x40518..0x40524
      
      		(gdb) pdbx filter var shared
      
      		(gdb) pdbx filter data 0x60f48..0x60f50
      
      		(gdb) pdbx filter reads # read accesses
      
      		

      These define various filters, either on instructions (by specifying a source file and line, or an address range) or on variables (by symbol name or address range). There is also a filter to report only those accesses that use (read) the data in case of a data race.
       
    • There are two basic configurations, which are mutually exclusive:
       
      • Ignore events specified by filters (default behavior)
        
        				(gdb) pdbx fset suppress
        
        				
      • Ignore events not specified by filters
        
        				(gdb) pdbx fset focus
        
        				

        With the first one the filters act as a suppression list (blacklist) of code or data sections that are not analyzed, whilst with the latter they act as a focus list (whitelist) of the only sections that are analyzed.
         
    • Get debug command help
      
      		(gdb) help pdbx
      
      		

      This command will provide additional help on the commands.

    Use cases for filters:

    • Focused debugging, e.g. debug a single source file or only focus on one specific memory location.
    • Limit overhead and control false positives. Detection adds some run-time and memory overhead. The more the filters narrow down the scope of the analysis, the more that overhead is reduced. Filters can also be used to exclude false positives, which can occur when real data races are detected but, by design, have no impact on the application’s correctness (e.g. results from multiple threads do not need to be stored globally in a strict order).
    • Exclude 3rd-party code from analysis

    Some additional hints for using PDBX:

    • Optimized code (symptom):
      
      		(gdb) run
      
      		data race detected
      
      		1: write question, 4 bytes from foo.c:36
      
      		3: read question, 4 bytes from foo.c:40
      
      		Breakpoint -11, 0x401515 in foo () at foo.c:36
      
      		*answer = 42;
      
      		(gdb)
      
      		

       
    • Incident has to be analyzed further:
      • Remember: data races are reported on memory objects
      • If symbol name cannot be resolved: only address is printed
         
    • Recommendation:
      Unoptimized code (-O0) is easier to understand when debugging, because with optimization temporaries may be removed or optimized away, etc.
       
    • Reported data races appear to be false positives:
      • Not all data races are bad… they might be intended by the user
      • OpenMP*: Distinct parallel sections using the same variable (same stack frame) can result in false positives

    Note:
    PDBX is not available for Eclipse* IDE and will only work for remote debugging of native coprocessor applications. See section Debugging Remotely with PDBX for more information on how to use it.

    Debugging on Command Line

    There are multiple versions available:

    • Debug natively on Intel® Xeon Phi™ coprocessor
    • Execute GNU* GDB on host and debug remotely

    Debug natively on Intel® Xeon Phi™ coprocessor
    This version of Intel’s GNU* GDB runs natively on the coprocessor. It is included in Intel® MPSS only and needs to be made available on the coprocessor first in order to run it. Depending on the MPSS version it can be found at the provided location:

    • MPSS 2.1: /usr/linux-k1om-4.7/linux-k1om/usr/bin/gdb
    • MPSS 3.*: included in gdb-7.5+mpss3.*.k1om.rpm as part of package mpss-3.*-k1om.tar
      (for MPSS 3.1.2, please see Errata, for MPSS 3.1.4 use mpss-3.1.4-k1om-gdb.tar)

      For MPSS 3.* the coprocessor native GNU* GDB requires debug information from some system libraries for proper operation. Please see Errata for more information.

    Execute GNU* GDB on host and debug remotely
    There are two ways to start GNU* GDB on the host and debug remotely using GDBServer on the coprocessor:

    • Intel® MPSS:
      • MPSS 2.1: /usr/linux-k1om-4.7/bin/x86_64-k1om-linux-gdb
      • MPSS 3.*: <mpss_root>/sysroots/x86_64-mpsssdk-linux/usr/bin/k1om-mpss-linux/k1om-mpss-linux-gdb
      • GDBServer:
        /usr/linux-k1om-4.7/linux-k1om/usr/bin/gdbserver
        (same path for MPSS 2.1 & 3.*)
    • Intel® Composer XE:
      • Source environment to start GNU* GDB:
        
        				$ source debuggervars.[sh|csh]
        
        				$ gdb-mic
        
        				
      • GDBServer:
        <composer_xe_root>/debugger/gdb/target/mic/bin/gdbserver

    The sourcing of the debugger environment is only needed once. If you have already sourced the corresponding compilervars.[sh|csh] script, you can omit this step and gdb-mic should already be in your default search path.

    Attention: Do not mix GNU* GDB & GDBServer from different packages! Always use both from either Intel® MPSS or Intel® Composer XE!

    Debugging Natively

    1. Make sure GNU* GDB is already on the target; either:
    • Copy manually, e.g.:
      
      		$ scp /usr/linux-k1om-4.7/linux-k1om/usr/bin/gdb mic0:/tmp
      
      		
    • Add to the coprocessor image (see Intel® MPSS documentation)
       
    2. Run GNU* GDB on the Intel® Xeon Phi™ coprocessor, e.g.:
      
      		$ ssh -t mic0 /tmp/gdb
      
      		

       
    3. Initiate a debug session, e.g.:
    • Attach:
      
      		(gdb) attach <pid>

      <pid> is PID on the coprocessor
    • Load & execute:
      
      		(gdb) file <path_to_application>

      <path_to_application> is path on coprocessor
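
    Putting these steps together, a minimal native session might look like this (the application name is illustrative):

    	$ scp /usr/linux-k1om-4.7/linux-k1om/usr/bin/gdb mic0:/tmp
    	$ ssh -t mic0 /tmp/gdb
    	(gdb) file /tmp/myapp
    	(gdb) break main
    	(gdb) run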

    Some additional hints:

    • If native application needs additional libraries:
      Set $LD_LIBRARY_PATH, e.g. via:
      
      		(gdb) set env LD_LIBRARY_PATH=/tmp/
      
      		

      …or set the variable before starting GDB
       
    • If source code is relocated, help the debugger to find it:
      
      		(gdb) set substitute-path <from> <to>

      Change paths from <from> to <to>. You can relocate a whole source (sub-)tree with that.

    Debugging is no different than on host thanks to a real Linux* environment on the coprocessor!

    Debugging Remotely

    1. Copy GDBServer to coprocessor, e.g.:
      
      		$ scp <composer_xe_root>/debugger/gdb/target/mic/bin/gdbserver mic0:/tmp

      During development you can also add GDBServer to your coprocessor image!
       
    2. Start GDB on host, e.g.:
      
      		$ source debuggervars.[sh|csh]
      
      		$ gdb-mic
      
      		


      Note:
      There is also a version named gdb-ia which is for IA-32/Intel® 64 only!
       
    3. Connect:
      
      		(gdb) target extended-remote | ssh -T mic0 /tmp/gdbserver --multi -
      
      		

       
    4. Set sysroot from MPSS installation, e.g.:
      
      		(gdb) set sysroot /opt/mpss/3.1.4/sysroots/k1om-mpss-linux/
      
      		

      If you do not specify this you won't get debugger support for system libraries.
       
    5. Debug:
    • Attach:
      
      		(gdb) file <path_to_application>
      
      		(gdb) attach <pid>

      <path_to_application> is path on host, <pid> is PID on the coprocessor
    • Load & execute:
      
      		(gdb) file <path_to_application>
      
      		(gdb) set remote exec-file <remote_path_to_application>

      <path_to_application> is path on host, <remote_path_to_application> is path on the coprocessor
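
    Putting these steps together, a minimal remote session might look like this (host and coprocessor paths are illustrative):

    	$ source debuggervars.sh
    	$ gdb-mic
    	(gdb) target extended-remote | ssh -T mic0 /tmp/gdbserver --multi -
    	(gdb) set sysroot /opt/mpss/3.1.4/sysroots/k1om-mpss-linux/
    	(gdb) file /home/user/myapp
    	(gdb) set remote exec-file /tmp/myapp
    	(gdb) break main
    	(gdb) run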

    Some additional hints:

    • If remote application needs additional libraries:
      Set $LD_LIBRARY_PATH, e.g. via:
      
      		(gdb) target extended-remote | ssh mic0 LD_LIBRARY_PATH=/tmp/ /tmp/gdbserver --multi -
      
      		
    • If source code is relocated, help the debugger to find it:
      
      		(gdb) set substitute-path <from> <to>

      Change paths from <from> to <to>. You can relocate a whole source (sub-)tree with that.
       
    • If libraries have different paths on host & target, help the debugger to find them:
      
      		(gdb) set solib-search-path <lib_paths>

      <lib_paths> is a colon separated list of paths to look for libraries on the host

    Debugging is no different than on host thanks to a real Linux* environment on the coprocessor!

    Debugging Remotely with PDBX

    PDBX has some prerequisites that must be fulfilled for proper operation. Use the pdbx check command to see whether PDBX is working:

    1. First step:
      
      		(gdb) pdbx check
      
      		checking inferior...failed.
      
      		


      Solution:
      Start a remote application (inferior) and hit some breakpoint (e.g. b main & run)
       
    2. Second step:
      
      		(gdb) pdbx check
      
      		checking inferior...passed.
      
      		checking libpdbx...failed.
      
      		


      Solution:
      Use set solib-search-path <lib_paths> to provide the path of libpdbx.so.5 on the host.
       
    3. Third step:
      
      		(gdb) pdbx check
      
      		checking inferior...passed.
      
      		checking libpdbx...passed.
      
      		checking environment...failed.
      
      		


      Solution:
      Set additional environment variables on the target for OpenMP*. They need to be set when starting GDBServer (similar to setting $LD_LIBRARY_PATH); see the example after this list.
    • $INTEL_LIBITTNOTIFY32=""
    • $INTEL_LIBITTNOTIFY64=""
    • $INTEL_ITTNOTIFY_GROUPS=sync
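
    For example, the variables can be passed on the GDBServer launch line, analogous to the $LD_LIBRARY_PATH example shown earlier:

    	(gdb) target extended-remote | ssh mic0 INTEL_LIBITTNOTIFY32="" INTEL_LIBITTNOTIFY64="" INTEL_ITTNOTIFY_GROUPS=sync LD_LIBRARY_PATH=/tmp/ /tmp/gdbserver --multi -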

    Debugging with Eclipse* IDE

    Intel offers an Eclipse* IDE debugger plug-in for Intel® MIC that has the following features:

    • Seamless debugging of host and coprocessor
    • Simultaneous view of host and coprocessor threads
    • Supports multiple coprocessor cards
    • Supports both C/C++ and Fortran
    • Support of offload extensions (auto-attach to offloaded code)
    • Support for Intel® Many Integrated Core Architecture (Intel® MIC Architecture): Registers & Disassembly

    Eclipse* IDE with Offload Debug Session

    The plug-in is part of both Intel® MPSS and Intel® Composer XE.

    Pre-requisites

    In order to use the provided plug-in the following pre-requisites have to be met:

    • Supported Eclipse* IDE version:
      • 4.2 with Eclipse C/C++ Development Tools (CDT) 8.1 or later
      • 3.8 with Eclipse C/C++ Development Tools (CDT) 8.1 or later
      • 3.7 with Eclipse C/C++ Development Tools (CDT) 8.0 or later

    We recommend: Eclipse* IDE for C/C++ Developers (4.2)

    • Java* Runtime Environment (JRE) 6.0 or later
    • For Fortran optionally Photran* plug-in
    • Remote System Explorer (aka. Target Management) to debug native coprocessor applications
    • Only for plug-in from Intel® Composer XE, source debuggervars.[sh|csh] for Eclipse* IDE environment!

    Install Intel® C++ Compiler plug-in (optional):
    Add plug-in via “Install New Software…”:
    Install Intel® C++ Compiler plug-in (optional)
    This Plug-in is part of Intel® Composer XE (<composer_xe_root>/eclipse_support/cdt8.0/). It adds Intel® C++ Compiler support which is not mandatory for debugging. For Fortran the counterpart is the Photran* plug-in. These plug-ins are recommended for the best experience.

    Note:
    Uncheck “Group items by category”, as the list will be empty otherwise!

    Install Plug-in for Offload Debugging

    Add plug-in via “Install New Software…”:
    Install Plug-in for Offload Debugging

    Plug-in is part of:

    • Intel® MPSS:
      • MPSS 2.1: <mpss_root>/eclipse_support/
      • MPSS 3.*: /usr/share/eclipse/mic_plugin/
    • Intel® Composer XE:<composer_xe_root>/debugger/cdt/

    Configure Offload Debugging

    • Create a new debug configuration for “C/C++ Application”
    • Click on “Select other…” and select MPM (DSF) Create Process Launcher:Configure Offload Debugging
      The “MPM (DSF) Create Process Launcher” needs to be used for our plug-in. Please note that this instruction is for both C/C++ and Fortran applications! Even though Photran* is installed and a “Fortran Local Application” entry is visible (not in the screenshot above!) don’t use it. It is not capable of using MPM.
       
    • In “Debugger” tab specify MPM script of Intel’s GNU* GDB:
      • Intel® MPSS:
        • MPSS 2.1: <mpss_root>/mpm/bin/start_mpm.sh
        • MPSS 3.*: /usr/bin/start_mpm.sh
          (for MPSS 3.1.1, 3.1.2 or 3.1.4, please see Errata)
      • Intel® Composer XE:
        <composer_xe_root>/debugger/mpm/bin/start_mpm.sh
        Configure Offload Debugging (Debugger)
        Here, you finally add Intel’s GNU* GDB for offload debugging (using MPM (DSF)). It is a script that takes care of setting up the full environment needed. No further configuration is required (e.g. which coprocessor cards, GDBServer & ports, IP addresses, etc.); it works fully automatically and transparently.

    Start Offload Debugging

    Debugging offload enabled applications is not much different than applications native for the host:

    • Create & build an executable with offload extensions (C/C++ or Fortran)
    • Don’t forget to add debug information (-g) and reduce optimization level if possible (-O0)
    • Start debug session:
      • Host & target debugger will work together seamlessly
      • All threads from host & target are shown and described
      • Debugging is same as used from Eclipse* IDE

    Eclipse* IDE with Offload Debug Session (Example)

    This is an example (Fortran) of what offload debugging looks like. On the left side we see host & mic0 threads running. One thread (11) from the coprocessor has hit the breakpoint we set inside the loop of the offloaded code. Run control (stepping, continuing, etc.), setting breakpoints, evaluating variables/memory, … work as they used to.

    Additional Requirements for Offload Debugging

    For debugging offload enabled applications additional environment variables need to be set:

    • Intel® MPSS 2.1:
      COI_SEP_DISABLE=FALSE
      MYO_WATCHDOG_MONITOR=-1

       
    • Intel® MPSS 3.*:
      AMPLXE_COI_DEBUG_SUPPORT=TRUE
      MYO_WATCHDOG_MONITOR=-1

    Set those variables before starting Eclipse* IDE!
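
    For example, on the host (MPSS 3.* case; a bash shell is assumed and the Eclipse* launch command depends on your installation):

    	$ export AMPLXE_COI_DEBUG_SUPPORT=TRUE
    	$ export MYO_WATCHDOG_MONITOR=-1
    	$ eclipse &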

    Those are currently needed but might become obsolete in the future. Please be aware that the debugger cannot and should not be used in combination with Intel® VTune™ Amplifier XE. Hence disabling SEP (as part of Intel® VTune™ Amplifier XE) is valid. The watchdog monitor must be disabled because a debugger can stop execution for an unspecified amount of time. Hence the system watchdog might assume that a debugged application, if not reacting anymore, is dead and will terminate it. For debugging we do not want that.

    Note:
    Do not set those variables for a production system!

    For Intel® MPSS 3.2 and later:
    MYO debug libraries are no longer installed with Intel MPSS 3.2 by default. This is a change from earlier Intel MPSS versions. Users must install the MYO debug libraries manually in order to debug MYO enabled applications using the Eclipse plug-in for offload debugging. For Intel MPSS 3.2 (and later) the MYO debug libraries can be found in the package mpss-myo-dbg-* which is included in the mpss-*.tar file.

    MPSS 3.2 and 3.2.1 do not support offload debugging with Intel® Composer XE 2013 SP1, please see Errata for more information!

    Configure Native Debugging

    Configure Remote System Explorer
    To debug native coprocessor applications we need to configure the Remote System Explorer (RSE).

    Note:
    Before you continue, make sure SSH works (e.g. via command line). You can also specify different credentials (user account) via RSE and save the password.

    The basic steps are quite simple:

    1. Show the Remote System window:
      Menu Window->Show View->Other…
      Select: Remote Systems->Remote Systems
       
    2. Add a new system node for each coprocessor:
      RSE Remote Systems Window
      Context menu in window Remote Systems: New Connection…
    • Select Linux, press Next>
    • Specify hostname of the coprocessor (e.g. mic0), press Next>
    • In the following dialogs select:
      • ssh.files
      • processes.shell.linux
      • ssh.shells
      • ssh.terminals

    Repeat this step for each coprocessor!

    Transfer GDBServer
    Transfer of the GDBServer to the coprocessor is required for remote debugging. We choose /tmp/gdbserver as the target location on the coprocessor here (important for the following sections).

    Transfer the GDBServer to the coprocessor target, e.g.:

    
    	$ scp <composer_xe_root>/debugger/gdb/target/mic/bin/gdbserver mic0:/tmp

    During development you can also add GDBServer to your coprocessor image!

    Note:
    See section Debugging on Command Line above for the correct path of GDBServer, depending on the chosen package (Intel® MPSS or Intel® Composer XE)!

    Debug Configuration

    Eclipse* IDE Debug Configuration Window

    To create a new debug configuration for a native coprocessor application (here: native_c++) create a new one for C/C++ Remote Application.

    Set Connection to the coprocessor target configured with RSE before (here: mic0).

    Specify the remote path of the application, wherever it was copied to (here: /tmp/native_c++). We’ll address how to manually transfer files later.

    Set the flag for “Skip download to target path.” if you don’t want the debugger to upload the executable to the specified path. This can be meaningful if you have complex projects with external dependencies (e.g. libraries) and don’t want to manually transfer the binaries.
    (for MPSS 3.1.2 or 3.1.4, please see Errata)

    Note that we use C/C++ Remote Application here. This is also true for Fortran applications because there’s no remote debug configuration section provided by the Photran* plug-in!

    Eclipse* IDE Debug Configuration Window (Debugger)

    In Debugger tab, specify the provided Intel GNU* GDB for Intel® MIC (here: gdb-mic).

    Eclipse* IDE Debug Configuration Window (Debugger) -- Specify .gdbinit

    In the above example, set sysroot from MPSS installation in .gdbinit, e.g.:

    
    	set sysroot /opt/mpss/3.1.4/sysroots/k1om-mpss-linux/
    
    	

    You can use .gdbinit or any other command file that should be loaded before starting the debugging session. If you do not specify this you won't get debugger support for system libraries.

    Note:
    See section Debugging on Command Line above for the correct path of GDBServer, depending on the chosen package (Intel® MPSS or Intel® Composer XE)!

    Eclipse* IDE Debug Configuration Window (Debugger/GDBServer)

    In Debugger/Gdbserver Settings tab, specify the uploaded GDBServer (here: /tmp/gdbserver).

    Build Native Application for the Coprocessor

    Configuration depends on the installed plug-ins. For C/C++ applications we recommend installing the Intel® C++ Compiler XE plug-in that comes with Composer XE. For Fortran, install Photran* (3rd party) and select the Intel® Fortran Compiler manually.

    Make sure to use the debug configuration and provide options as if debugging on the host (-g). Optionally, disabling optimizations with -O0 can make the instruction flow easier to follow when debugging.

    The only difference compared to host builds is that you need to cross-compile for the coprocessor: use the -mmic option, e.g.:
    Eclipse* IDE Project Properties

    After configuration, clean your build. This is needed because Eclipse* IDE might not notice all dependencies. And finally, build.

    Note:
    The configuration dialog shown only exists for the Intel® C++ Compiler plug-in. For Fortran, users need to install the Photran* plug-in, switch the compiler/linker to ifort by hand, and add -mmic manually. This has to be done for both the compiler and the linker!

    Start Native Debugging

    Transfer the executable to the coprocessor, e.g.:

    • Copy manually  (e.g. via script on the terminal)
    • Use the Remote Systems window (RSE) to copy files from host and paste to coprocessor target (e.g. mic0):
      RSE Remote Systems Window (Copy)
      Select the files from the tree (Local Files) and paste them to where you want them on the target to be (e.g. mic0)
       
    • Use NFS to mirror builds to coprocessor (no need for update)
    • Use debugger to transfer (see earlier)

    Note:
    It is crucial that the executable can be executed on the coprocessor. In some cases the execution bits might not be set after copying.
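
    If that happens, the execute bit can be restored from the host, e.g. (using the application name from this example):

    	$ ssh mic0 chmod +x /tmp/native_c++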

    Start debugging using the C/C++ Remote Application created in the earlier steps. It should connect to the coprocessor target and launch the specified application via the GDBServer. Debugging is the same as for local/host applications.
    Native Debugging Session (Remote)

    Note:
    This works for coprocessor native Fortran applications the exact same way!

    Documentation

    More information can be found in the official documentation:

    • Intel® MPSS:
      • MPSS 2.1:
        <mpss_root>/docs/gdb/gdb.pdf
        <mpss_root>/eclipse_support/README-INTEL
      • MPSS 3.*:
        not available yet (please see Errata)
    • Intel® Composer XE:
      <composer_xe_root>/Documentation/[en_US|ja_JP]/debugger/gdb/gdb.pdf
      <composer_xe_root>/Documentation/[en_US|ja_JP]/debugger/gdb/eclmigdb_config_guide.pdf

    The PDF gdb.pdf is the original GNU* GDB manual for the base version Intel ships, extended by all features added. So, this is the place to get help for new commands, behavior, etc.
    README-INTEL from Intel® MPSS contains a short guide how to install and configure the Eclipse* IDE plug-in.
    PDF eclmigdb_config_guide.pdf provides an overall step-by-step guide how to debug with the command line and with Eclipse* IDE.

    Using Intel® C++ Compiler with the Eclipse* IDE on Linux*:
    http://software.intel.com/en-us/articles/intel-c-compiler-for-linux-using-intel-compilers-with-the-eclipse-ide-pdf/
    The knowledgebase article (Using Intel® C++ Compiler with the Eclipse* IDE on Linux*) is a step-by step guide how to install, configure and use the Intel® C++ Compiler with Eclipse* IDE.

    Errata

    • With the recent switch from MPSS 2.1 to 3.1 some packages might be incomplete or missing. Future updates will add improvements. Currently, documentation for GNU* GDB is missing.
       
    • For MPSS 3.1.2 and 3.1.4 the respective package mpss-3.1.[2|4]-k1om.tar is missing. It contains binaries for the coprocessor, like the native GNU* GDB for the coprocessor. It also contains /usr/libexec/sftp-server which is needed if you want to debug native applications on the coprocessor and require Eclipse* IDE to transfer the binary automatically. As this is missing you need to transfer the files manually (select “Skip download to target path.” in this case).
      As a workaround, you can use mpss-3.1.1-k1om.tar from MPSS 3.1.1 and install the binaries from there. If you use MPSS 3.1.4, the native GNU* GDB is available separately via mpss-3.1.4-k1om-gdb.tar.
       
    • With MPSS 3.1.1, 3.1.2 or 3.1.4 the script <mpss_root>/mpm/bin/start_mpm.sh uses an incorrect path to the MPSS root directory. Hence offload debugging is not working. You can fix this by creating a symlink for your MPSS root, e.g. for MPSS 3.1.2:

      $ ln -s /opt/mpss/3.1.2 /opt/mpss/3.1

      Newer versions of MPSS correct this. This workaround is not required if you use the start_mpm.sh script from the Intel(R) Composer XE package.
       
    • For MPSS 3.* the coprocessor native GNU* GDB requires debug information from some system libraries for proper operation.
      Beginning with MPSS 3.1, debug information for system libraries is not installed on the coprocessor anymore. If the coprocessor native GNU* GDB is executed, it will fail when loading/continuing with a signal (SIGTRAP).
      Current workaround is to copy the .debug folders for the system libraries to the coprocessor, e.g.:

      $ scp -r /opt/mpss/3.1.2/sysroots/k1om-mpss-linux/lib64/.debug root@mic0:/lib64/
       
    • MPSS 3.2 and 3.2.1 do not support offload debugging with Intel® Composer XE 2013 SP1.
      Offload debugging with the Eclipse plug-in from Intel® Composer XE 2013 SP1 does not work with Intel MPSS 3.2 and 3.2.1. A configuration file which is required for operation by the Intel Composer XE 2013 SP1 package has been removed with Intel MPSS 3.2 and 3.2.1. Previous Intel MPSS versions are not affected. Intel MPSS 3.2.3 fixes this problem (there is no version of Intel MPSS 3.2.2!).
  • Intel(R) Xeon Phi(TM) Coprocessor
  • Debugger
  • GNU* GDB
  • Eclipse* IDE
  • Sviluppatori
  • Linux*
  • Server
  • C/C++
  • Fortran
  • Avanzato
  • Principiante
  • Intermedio
  • Architettura Intel® Many Integrated Core
  • Server
  • Desktop
  • URL
  • Per iniziare
  • Sviluppo multithread
  • Errori di threading
  • Area tema: 

    IDZone

    Debugging Intel® Xeon Phi™ Applications on Windows* Host

    $
    0
    0

    Contents

    Introduction

    The Intel® Xeon Phi™ coprocessor is a product based on the Intel® Many Integrated Core Architecture (Intel® MIC). Intel offers a debug solution for this architecture that can debug applications running on an Intel® Xeon Phi™ coprocessor.

    There are many reasons why a debug solution for Intel® MIC is needed. Some of the most important are the following:

    • Developing native Intel® MIC applications is as easy as for IA-32 or Intel® 64 hosts. In most cases they just need to be cross-compiled (/Qmic).
      Yet, the Intel® MIC Architecture differs from the host architecture. Those differences can expose existing issues, and incorrect tuning for Intel® MIC can introduce new ones (e.g. data alignment, whether an application can handle hundreds of threads, efficient memory consumption, etc.).
    • Developing offload-enabled applications adds complexity, because host and coprocessor share the workload.
    • General lower-level analysis, tracing execution paths, learning the instruction set of the Intel® MIC Architecture, …

    Debug Solution for Intel® MIC

    For Windows* host, Intel offers a debug solution, the Intel® Debugger Extension for Intel® MIC Architecture Applications. It supports debugging offload-enabled applications as well as native Intel® MIC applications running on the Intel® Xeon Phi™ coprocessor.

    How to get it?

    To obtain Intel’s debug solution for Intel® MIC Architecture on Windows* host, you need the following:

    Debug Solution as Integration

    The debug solution from Intel is based on GNU* GDB 7.5:

    • Full integration into Microsoft Visual Studio*, no command line version needed
    • Available with Intel® Composer XE 2013 SP1 and later


    Why integration into Microsoft Visual Studio*?

    • Microsoft Visual Studio* is the established IDE on Windows* hosts
    • Integration reuses existing usability and features
    • Fortran support added with Intel® Fortran Composer XE

    Components Required

    The following components are required to develop and debug for Intel® MIC Architecture:

    • Intel® Xeon Phi™ coprocessor
    • Windows* Server 2008 R2, Windows* 7, or later
    • Microsoft Visual Studio* 2012 or later
      Support for Microsoft Visual Studio* 2013 was added with Intel® Composer XE 2013 SP1 Update 1.
    • Intel® MPSS 3.1 or later
    • C/C++ development:
      Intel® C++ Composer XE 2013 SP1 for Windows* or later
    • Fortran development:
      Intel® Fortran Composer XE 2013 SP1 for Windows* or later

    Configure & Test

    It is crucial to make sure that the coprocessor setup is correctly working. Otherwise the debugger might not be fully functional.

    Setup Intel® MPSS:

    • Follow Intel® MPSS readme-windows.pdf for setup
    • Verify that the Intel® Xeon Phi™ coprocessor is running

    Before debugging applications with offload extensions:

    • Use official examples from:
      C:\Program Files (x86)\Intel\Composer XE 2013 SP1\Samples\en_US
    • Verify that offloading code works

    Prerequisite for Debugging

    Debugger integration for Intel® MIC Architecture only works when debug information is available:

    • Compile in debug mode with at least the following option set:
      /Zi (compiler) and /DEBUG (linker)
    • Optional: Unoptimized code (/Od) makes debugging easier
      (because optimization may remove or optimize away temporaries, etc.)
      Visual Studio* Project Properties (Debug Information & Optimization)

    Applications can only be debugged in 64-bit mode

    • Set platform to x64
    • Verify that /MACHINE:x64 (linker) is set!
      Visual Studio* Project Properties (Machine)

    Debugging Applications with Offload Extension

    Start Microsoft Visual Studio* IDE and open or create an Intel® Xeon Phi™ project with offload extensions. Examples can be found in the Samples directory of Intel® Composer XE, that is:

    C:\Program Files (x86)\Intel\Composer XE 2013 SP1\Samples\en_US

    • C++\mic_samples.zip    or
    • Fortran\mic_samples.zip

    We’ll use intro_SampleC from the official C++ examples in the following.

    Compile the project with Intel® C++/Fortran Compiler.

    Characteristics of Debugging

    • Set breakpoints in code (during or before debug session):
      • In code mixed for host and coprocessor
      • Debugger integration automatically dispatches between host/coprocessor
    • Run control is the same as for native applications:
      • Run/Continue
      • Stop/Interrupt
      • etc.
    • Offloaded code stops execution (offloading thread) on host
    • Offloaded code is executed on coprocessor in another thread
    • IDE shows host/coprocessor information at the same time:
      • Breakpoints
      • Threads
      • Processes/Modules
      • etc.
    • Multiple coprocessors are supported:
      • Data shown is mixed:
        Keep in mind the different processes and address spaces
      • No further configuration needed:
        Debug as you go!

    Setting Breakpoints

    Debugging Applications with Offload Extension - Setting Breakpoints

    Note the mixed breakpoints here:
    The ones set in the normal code (not offloaded) apply to the host. Breakpoints on offloaded code apply to the respective coprocessor(s) only.
    The Breakpoints window shows all breakpoints (host & coprocessor(s)).

    Start Debugging

    Start debugging as usual via menu (shown) or <F5> key:
    Debugging Applications with Offload Extension - Start Debugging

    While debugging, continue till you reach a set breakpoint in offloaded code to debug the coprocessor code.

    Thread Information

    Debugging Applications with Offload Extension - Thread Information

    Information of host and coprocessor(s) is mixed. In the example above, the threads window shows two processes with their threads. One process comes from the host, which does the offload. The other one is the process hosting and executing the offloaded code, one for each coprocessor.

    Additional Requirements

    For debugging offload enabled applications additional environment variables need to be set:

    • Intel® MPSS 2.1:
      COI_SEP_DISABLE=FALSE
      MYO_WATCHDOG_MONITOR=-1

       
    • Intel® MPSS 3.*:
      AMPLXE_COI_DEBUG_SUPPORT=TRUE
      MYO_WATCHDOG_MONITOR=-1

    Set those variables before starting Visual Studio* IDE!
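
    For example, from a command prompt (MPSS 3.* case; devenv.exe is the Visual Studio* executable and its installation path may differ on your system):

    	> set AMPLXE_COI_DEBUG_SUPPORT=TRUE
    	> set MYO_WATCHDOG_MONITOR=-1
    	> devenv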

    Those are currently needed but might become obsolete in the future. Please be aware that the debugger cannot and should not be used in combination with Intel® VTune™ Amplifier XE. Hence disabling SEP (as part of Intel® VTune™ Amplifier XE) is valid. The watchdog monitor must be disabled because a debugger can stop execution for an unspecified amount of time. Hence the system watchdog might assume that a debugged application, if not reacting anymore, is dead and will terminate it. For debugging we do not want that.

    Note:
    Do not set those variables for a production system!

    Debugging Native Coprocessor Applications

    Pre-Requisites

    Create a native Intel® Xeon Phi™ application, transfer it to the coprocessor target, and execute it:

    • Use micnativeloadex.exe provided by Intel® MPSS for an application C:\Temp\mic-examples\bin\myApp, e.g.:

      > "C:\Program Files\Intel\MPSS\sdk\coi\tools\micnativeloadex\micnativeloadex.exe" "C:\Temp\mic-examples\bin\myApp" -d 0
       
    • Option -d 0 specifies the first device (zero-based) in case there are multiple coprocessors in a system
    • This application is executed directly after transfer

    Using micnativeloadex.exe also takes care of dependencies (i.e. libraries) and transfers them, too.

    Other ways to transfer and execute native applications are also possible (but more complex):

    • SSH/SCP
    • NFS
    • FTP
    • etc.

    Debugging native applications from the Visual Studio* IDE is only possible via Attach to Process…:

    • micnativeloadex.exe has been used to transfer and execute the native application
    • Make sure the application waits till attached, e.g. by:
      
      		static int lockit = 1;
      
      		while(lockit) { sleep(1); }
      
      		
    • After having attached, set lockit to 0 and continue.
    • No Visual Studio* solution/project is required.

    Only one coprocessor at a time can be debugged this way.
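
    A complete minimal native test program built around that wait loop might look as follows (a sketch; names and output are illustrative):

    #include <stdio.h>
    #include <unistd.h>

    /* volatile so the write performed from the debugger is not optimized away */
    static volatile int lockit = 1;

    int main(void)
    {
        /* Wait here until a debugger attaches and sets lockit to 0. */
        while (lockit) { sleep(1); }

        printf("Debugger attached, continuing.\n");
        return 0;
    }

    Build it as a native coprocessor binary (for example with the Intel® C++ Compiler's -mmic option), transfer and start it with micnativeloadex.exe as shown above, and then attach as described below.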

    Configuration

    Open the options via the TOOLS/Options… menu:
    Debugging Native Coprocessor Applications - Configuration

    This dialog tells the debugger extension where to find the binary and the sources. It needs to be adjusted every time a different native coprocessor application is debugged.

    The entry solib-search-path directories works the same way as the analogous GNU* GDB command. It allows mapping paths from the build system to the host system running the debugger.

    The entry Host Cache Directory is used for caching symbol files. It can speed up symbol lookup for large applications.

    Attach

    Open the dialog via the TOOLS/Attach to Process… menu:
    Debugging Native Coprocessor Applications - Attach to Process...

    Specify the Intel® Debugger Extension for Intel® MIC Architecture. Set the IP address and the port the GDB server is running on; the default port of the GDB server is 2000, so use that.

    After a short delay the processes of the coprocessor card are listed. Select one to attach.

    Note:
    The checkbox Show processes from all users has no function for the coprocessor, because user accounts cannot be mapped between host and target (Windows* vs. Linux*).

  • Intel(R) Xeon Phi(TM) Coprocessor
  • Visual Studio
  • Debugger
  • Sviluppatori
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Yocto Project
  • Server
  • Windows*
  • C/C++
  • Fortran
  • Avanzato
  • Principiante
  • Intermedio
  • Architettura Intel® Many Integrated Core
  • Server
  • Desktop
  • URL
  • Argomenti sui compilatori
  • Per iniziare
  • Sviluppo multithread
  • Area tema: 

    IDZone

    Building an Android AOSP Module with the Intel® C++ Compiler for Android*

    $
    0
    0

    This article describes how to build modules with the Intel C++ Compiler for Android* (ICC) and integrate them into the Android Open Source Project (AOSP) build. A module is typically a shared library or an application that becomes part of the Android image on the device, for example audio and video codecs or multimedia applications.

    Building a Single Module Outside the AOSP Build Process

    Building an Android module with the Intel® C++ Compiler for Android* is similar to building with the GCC toolchain. Follow the Getting Started guide to compile the module with the NDK build system or a standalone toolchain.

    Copying a Prebuilt Module into the AOSP Image During the Build Process

    After the module has been compiled and linked, it can be integrated into the AOSP build process as a prebuilt library. It will then be included in the Android system image.

    Create an Android.mk file inside the root directory of the AOSP tree with the following content:

    include $(CLEAR_VARS)
    LOCAL_MODULE := <library_name>
    LOCAL_MODULE_SUFFIX:=.so
    LOCAL_MODULE_TAGS := optional
    LOCAL_MODULE_CLASS := SHARED_LIBRARIES
    LOCAL_SRC_FILES := <library_file_name>
    include $(BUILD_PREBUILT)

    The placeholders <library_name> and <library_file_name> should contain the name of the library file on the host and at the destination, respectively. Usually they are the same.

    Building a Module as Part of the AOSP Build Process

    The versions of the AOSP source tree provided by Intel should already be prepared to use ICC as part of the build process. You can check whether ICC is supported in a source tree by verifying that the file <AOSP_ROOT>build/core/icc_config.mk exists.

    Setting Up the Build Environment

    Before using the compiler, the compiler path must be configured in the file <AOSP_ROOT>build/core/icc_config.mk. Specify the path to ICC by changing the TARGET_ICC_TOOLS_PREFIX variable. For example:

    TARGET_ICC_TOOLS_PREFIX := /opt/intel/cc_android_14.0.1.017/bin/

    There are a few other variables that are useful for configuration. All of the variables mentioned here can also be specified directly on the command line.

    • ICC_MODULES
      Forces the listed modules to be compiled with ICC, regardless of the default compiler.
    • GCC_MODULES
      Forces the listed modules to be compiled with GCC, regardless of the default compiler.
    • ICC_IPO_MODULES
      Specifies modules that should be compiled with interprocedural optimization (IPO) when the module is compiled with ICC.
    • ICC_FREESTANDING_MODULES
      Specifies modules that are not linked against the standard libraries. See the documentation for details. This setting should not be changed.

    Building the Module

    If the module that should be compiled with ICC is already configured via the ICC_MODULES variable, no additional action is required. Simply start the build as usual:

    
    source build/envsetup.sh
    lunch
    make flashfiles
    

    You can also specify the module to be compiled with ICC directly on the command line:

    
    source build/envsetup.sh
    lunch
    make ICC_MODULES=libskia ICC_STATIC_MODULES=libskia ICC_IPO_MODULES=libskia flashfiles
    

    Troubleshooting

    Additional patches may be required to compile modules that are part of AOSP. Contact your Intel representative if you run into problems.

  • Android
  • AOSP
  • Intel(R) C++ Compiler for Android*
  • Build Android Module
  • Android build process
  • Sviluppatori
  • Android*
  • Android*
  • C/C++
  • Avanzato
  • Intermedio
  • Telefono
  • Tablet
  • URL
  • Per iniziare
  • Area tema: 

    IDZone

    Intel® System Studio - Solutions, Tips and Tricks

    $
    0
    0
  • Sviluppatori
  • Android*
  • Tizen*
  • Unix*
  • Yocto Project
  • Android*
  • Tizen*
  • C/C++
  • Avanzato
  • Principiante
  • Intermedio
  • Intel® System Studio
  • Intel® Advanced Vector Extensions
  • Intel® Streaming SIMD Extensions
  • Telefono
  • Tablet
  • URL
  • Esempio di codice
  • Argomenti sui compilatori
  • Controllo degli errori
  • Per iniziare
  • Miglioramento delle prestazioni
  • Librerie
  • Errori di memoria
  • Sviluppo multithread
  • Static Security Analysis
  • Errori di threading
  • ISS-Learn
  • Learning Lab
  • Area tema: 

    IDZone

    Using the Build Tab

    $
    0
    0

    In the Intel® XDK development environment, once you have completed debugging and testing your app you can use the Build tab to make packages suitable for submitting to a variety of app stores. There are two types of app builds available:

    • Build a Mobile App creates a native app package suitable for submission to an app store for download and installation onto a mobile device.
    • Build a Web App creates an HTML5 package suitable for submitting to web app stores or for placement on a web server.

    Build a Mobile App

    You can choose from a variety of targets when building your app for a native operating system, including Google Android*, Apple iOS* and Windows 8*.

    Click the button corresponding to the target for which you wish to build. This connects you to the build server and uploads your project files to your account on the Intel XDK build server. Once there, the build server may request additional information to complete the build process. These additional screens typically require an app name, icon and splash screen images, and any certificates needed to sign the app. In some instances, you may be required to have a developer's license to complete the build process. Also, in some cases the build process may ask you to copy files and/or keys from the build server for use with app store submission.

    NOTE: The Intel XDK build service will NOT submit your packaged app to a store, but it does provide you with a package that is suitable for store submission. If you wish to submit your app to a store, you must do that outside of the Intel XDK.

    There are two basic hybrid HTML5 web app containers that can be built in the Mobile App section of build targets: standard Intel XDK containers and Apache Cordova* Beta containers.

    In addition, there are two unique hybrid HTML5 Web App containers: the Crosswalk* for Android container and the Tizen* container.

    All build targets create a hybrid HTML5 web app package that can be submitted to an app store and installed on a mobile device. The Crosswalk for Android container includes its own HTML5 runtime engine, based on the Crosswalk project (see Using Crosswalk). All other applications utilize the built-in webview (aka embedded browser) that is part of the target mobile device firmware to execute (render) your hybrid HTML5 web app. For example, Android packages use the Android browser webview built into the Android device, iOS packages use the Apple Safari* browser webview built into the iOS device, etc.

    Standard Intel XDK Container Builds

    The standard Intel XDK container builds are based on the original appMobi* hybrid HTML5 container and support the full Intel XDK API, the appMobi services API and the standard Cordova 2.9.0 API. These builds include all the targets listed in the Mobile App section of the build page except: the Crosswalk for Android, the Tizen and the Cordova for "*" targets.

    Details regarding how to use the standard Intel XDK containers are provided in the links below:

    NOTE: The Nook* and Amazon* build targets are minor variations of the standard Intel XDK Android build target; use the instructions for the Android build target as a guide for these two build targets.

    Beta Cordova Container Builds

    The Beta Cordova build targets are based on the standard Cordova CLI 3.x build system. These build targets are compatible with standard Cordova (aka Adobe PhoneGap*) build systems and support the core Cordova 3.x APIs; in addition, this build system also supports a subset of the Intel XDK APIs, via a set of custom intel.xdk Cordova plugins. These build targets do not support the appMobi services APIs.

    At this time, these targets require that a hand-built intelxdk.config.xml file be included in your project to direct the build process and specify the plugins (APIs) required by your application. Please start with Using the Intel XDK “Cordova for *” Build Option for details on how to use these build targets.

    NOTE: The "Windows 8" build refers to applications targeting the Windows 8 "Modern UI" environment. The build named "Windows 8 Phone" targets Windows Phone devices.

    In addition to the overview referenced above, please see these two articles for information regarding the plugin names and the APIs they provide access to when you include them:

    Crosswalk for Android Container Build

    Information on how to build apps for the Crosswalk for Android container is described in Using the Intel XDK “Crosswalk for Android” Build Option. Like the Cordova build targets, this build target also supports the core Cordova 3.x APIs and a subset of the Intel XDK APIs; it does not support the appMobi services API.

    Unlike Cordova and standard Intel XDK built apps, the Crosswalk container includes a custom webview for executing your hybrid HTML5 application. For this reason it is significantly larger than packages built using the other build targets. This build option creates two processor-specific packages (an x86 APK for use on x86 Android devices and an ARM APK for ARM Android devices). Follow the instructions in Submitting Multiple Crosswalk APKs to the Google Play Store* to ensure that your Crosswalk application is available to the widest consumer audience. The store automatically delivers the appropriate APK to your customer's device.

    For technical and legal reasons the Crosswalk runtime engine is only available on Android 4.x devices.

    For details regarding which Intel XDK APIs are supported in the Crosswalk webview, and which plugins must be selected to ensure your application has access to the appropriate APIs, please see the references in the "Cordova Container Builds" section, above.

    Tizen Container Build

    See this tutorial for help if you wish to build a Tizen* application.

    NOTE: HTML5 application packages are standard ZIP files. If you are curious as to what is added to your app when a build is performed, you can "unzip" the app package and inspect its contents. However, if you modify the contents you will invalidate the application package signature; modifying a package after it has been built is not advised.

    Build a Web App

    There are fewer web app targets because there is less overhead required to put an HTML5 app on the web. These are mostly convenience builds that add manifest files and, in some cases, support libraries to your app package. You can "unzip" these packages to see what has been added and to better understand what these builds do.

    Click the button for your target of choice. This connects you to the build server and uploads your project to your account in the Intel XDK build server. Once the build server has your code, it will request any additional information needed to complete the process and add the internal infrastructure necessary to host your code as a web app on the selected platform.

    See this tutorial for information about building a Google Chrome web application.



    Forward Clustered Shading

    $
    0
    0

    This sample demonstrates Forward Clustered Shading, a recently proposed light culling method that allows the convenience of forward rendering, requires a single geometry pass, and efficiently handles high light counts. Special care has been taken to minimize CPU usage through low level optimizations and by leveraging ISPC. Various rendering techniques are available in the sample to compare and contrast performance (see https://software.intel.com/en-us/articles/deferred-rendering-for-current-and-future-rendering-pipelines).

  • Sviluppatori
  • Partner
  • Professori
  • Studenti
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Windows*
  • C/C++
  • Avanzato
  • Intermedio
  • Microsoft DirectX*
  • OpenGL*
  • Laptop
  • Tablet
  • Desktop
  • Contratto di licenza: 

  • URL
  • Esempio di codice
  • Area tema: 

    IDZone
  • Android*
  • Ultimo aggiornamento: 

    Martedì, 5 Agosto, 2014

    Improving Performance with MPI-3 Non-Blocking Collectives

    $
    0
    0

    The new MPI-3 non-blocking collectives offer potential improvements to application performance.  These gains can be significant for the right application.  But for some applications, you could end up lowering your performance by adding non-blocking collectives.  I'm going to discuss what the non-blocking collectives are and show a kernel which can benefit from using MPI_Iallreduce.

    What are MPI-3 non-blocking collectives?

    Non-blocking collectives are new versions of collective functions that can return immediately to your application code.  These versions can perform the collective operation in the background (as long as your MPI implementation supports this) while your application performs other work.  If your application is structured such that you can begin a collective operation, perform some local work, and get the results from that collective operation later, then your application might benefit from using non-blocking collectives.
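
    For reference, the non-blocking counterpart of MPI_Allreduce has the following C binding in the MPI-3 standard; it returns immediately, and the operation is completed later via MPI_Wait or MPI_Test on the returned request:

    int MPI_Iallreduce(const void *sendbuf, void *recvbuf, int count,
                       MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
                       MPI_Request *request);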

    When to use non-blocking collectives?

    In order to see a benefit from non-blocking collectives, your application must be able to do enough work between when the collective begins and when the collective must be completed to offset the additional overhead of checking for collective completion.  Larger messages typically require more overlapping computation to offset the cost of moving the data to a communication buffer, so if the message size is small relative to the overlapping computation, you are more likely to see a benefit.

    Additionally, you must have sufficient system resources available.  If you are already using all available system resources, then the MPI implementation cannot run the communication in parallel with your computation, and you will see no benefit, with possible performance degradation.

    How to identify potential improvements?

    There are several steps to identifying how to improve your application's performance using non-blocking collectives.  The first step is to determine how much of your application's total time is spent in collectives.  If you have very little time spent in collectives, there is very little of the overall application available for improving, and switching to non-blocking collectives likely isn't worth the investment.  You can look at the Summary Page in Intel® Trace Analyzer to quickly see if any of your top functions are collectives.

    Once you have determined that there is sufficient time in collectives, you need to check whether your application's workflow allows for non-blocking collectives.  For example, if you calculate a dataset, immediately use it in a collective, and use the collective results immediately after the collective, you will need to rework your application flow before you can see any benefit from non-blocking collectives.  But if you can calculate the dataset early, pass it into the collective as soon as it is calculated, and then not need the result until later, you have potential for non-blocking collectives.

    Example usage with MPI_Iallreduce

    Let's assume a code kernel with 3 arrays, each distributed across multiple ranks.  The kernel gets the average of the first array and uses it to modify the second array.  The minimum and maximum values in the second array are found and used, along with the average of the first array, to modify the third array.  The third array is then reduced to a single sum across all ranks.  Pseudo-code:

    MPI_Allreduce(A1,sumA1temp,MPI_SUM)
    avgA1=sum(sumA1temp(:))/(elements*ranks)
    A2(:)=A2(:)*avgA1
    MPI_Allreduce(A2,minA2temp,MPI_MIN)
    MPI_Allreduce(A2,maxA2temp,MPI_MAX)
    A3(:)=A3(:)+avgA1
    minA2=minval(minA2temp(:))
    maxA2=maxval(maxA2temp(:))
    A3(:)=A3(:)*(minA2+maxA2)*0.5
    MPI_Allreduce(A3,sumA3temp,MPI_SUM)
    finalsum=sum(sumA3temp(:))

    This kernel could gain performance by switching from MPI_Allreduce to MPI_Iallreduce for the minimum and maximum reductions on the second array.  Pseudo-code:

    MPI_Allreduce(A1,sumA1temp,MPI_SUM)
    avgA1=sum(sumA1temp(:))/(elements*ranks)
    A2(:)=A2(:)*avgA1
    MPI_Iallreduce(A2,minA2temp,MPI_MIN,req2min)
    MPI_Iallreduce(A2,maxA2temp,MPI_MAX,req2max)
    A3(:)=A3(:)+avgA1
    MPI_Wait(req2min)
    minA2=minval(minA2temp(:))
    MPI_Wait(req2max)
    maxA2=maxval(maxA2temp(:))
    A3(:)=A3(:)*(minA2+maxA2)*0.5
    MPI_Allreduce(A3,sumA3temp,MPI_SUM)
    finalsum=sum(sumA3temp(:))
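
    The same overlap pattern, sketched in C against the MPI-3 API (a simplified illustration rather than a line-by-line translation of the pseudo-code: the minimum and maximum are reduced as scalars, and the array sizes and contents are arbitrary):

    #include <mpi.h>

    #define N 10000

    int main(int argc, char **argv)
    {
        double a2[N], a3[N], mn, mx, mn_g, mx_g;
        MPI_Request reqs[2];
        int i;

        MPI_Init(&argc, &argv);

        for (i = 0; i < N; i++) { a2[i] = 0.5 * i; a3[i] = 1.0; }  /* arbitrary data */
        mn = mx = a2[0];
        for (i = 1; i < N; i++) {
            if (a2[i] < mn) mn = a2[i];
            if (a2[i] > mx) mx = a2[i];
        }

        /* Start both reductions; they can progress in the background. */
        MPI_Iallreduce(&mn, &mn_g, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD, &reqs[0]);
        MPI_Iallreduce(&mx, &mx_g, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD, &reqs[1]);

        /* Independent local work overlaps with the communication. */
        for (i = 0; i < N; i++) a3[i] += 1.0;

        /* The results are needed now, so complete both requests. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        for (i = 0; i < N; i++) a3[i] *= (mn_g + mx_g) * 0.5;

        MPI_Finalize();
        return 0;
    }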

    Exact improvements vary based on many factors, but can exceed a 50% reduction in kernel runtime.  In a test on a dual-socket system with Intel® Xeon® E5-2697 v2 processors, running 12 ranks (all using shared memory for communications) with the Intel® MPI Library Version 5.0, the non-blocking version took 54% less time to complete the kernel with a 10000-element array of randomly generated doubles.  This is because the collective is able to overlap the communication with computation, allowing increased parallelism in the application as a whole.

  • mpi-3
  • non-blocking collectives
  • Sviluppatori
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Server
  • Intermedio
  • Intel® Trace Analyzer and Collector
  • Intel® MPI Library
  • Message Passing Interface
  • Elaborazione basata su cluster
  • Ottimizzazione
  • Elaborazione parallela
  • Server
  • URL
  • Esempio di codice
  • Miglioramento delle prestazioni
  • Librerie
  • Area tema: 

    IDZone

    Code Generation for future Intel® MIC Architecture-based Processors

    $
    0
    0

    In versions 14.0.1 and later, the processor-specific options of the Intel® C++ Compiler and the Intel® Fortran Compiler support code generation for Intel® Advanced Vector Extensions 512 (Intel® AVX-512), targeting the next generation of Intel® MIC Architecture-based products, codenamed "Knights Landing".

    Code generation targeting this platform can be achieved through the use of the following options:

    For additional information on how to use these options, please refer to the article Intel® Compiler Options for Intel® SSE and Intel® AVX generation and Processor-specific Optimizations.
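
    As a hedged illustration (the option spellings below are assumptions based on the compilers' -x naming convention; verify them against the article referenced above), a compile for this target might look like:

    icc   -xMIC-AVX512 -c kernel.c      # Intel C++ Compiler, Linux*
    ifort -xMIC-AVX512 -c kernel.f90    # Intel Fortran Compiler, Linux*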

     

  • Sviluppatori
  • Linux*
  • C/C++
  • Fortran
  • Intermedio
  • Compilatore C++ Intel®
  • Compilatore Fortran Intel®
  • Architettura Intel® Many Integrated Core
  • Ottimizzazione
  • Vettorizzazione
  • URL
  • Argomenti sui compilatori
  • Area tema: 

    IDZone

    What's New in the Intel® XDK?

    $
    0
    0

    Developers can now easily integrate third-party service APIs to monetize their apps, or integrate back-end services to create richer applications. The Intel® XDK makes it easy to add hundreds of open-source third-party Cordova* plugins to an app, as well as various platform-specific plugins for Android*, iOS*, and Windows 8*.


    Creating a New Project

    An all-in-one workflow takes your app from initial idea to packaged release:

    • An all-in-one solution

    • Multiple ways to start creating your app


    Project Management View

    Manage all of your projects from a single page:

    • More ways to configure your project properties and customize project information

    • Configure your project with a rich set of Cordova* plugins

    • Manage multiple projects at the same time

     

    API and Plugin Management

    Access and manage APIs efficiently:

    • A Cordova* 3.5 base container for all operating system platforms, including hundreds of Cordova* APIs and Intel XDK APIs

    • Support for third-party plugins from Cordova* 3.5, PhoneGap*, and GitHub* repos

    • Support for custom third-party plugins

     

    A New Services Tab for Richer App Experiences

    Create content-rich apps using monetization, advertising, and other services:

    • Monetization services such as in-app payments from Google* and Apple*

    • Google* AdMob, Facebook*, and Urban Airship* via third-party plugins

    • Google* Analytics, plus Dropbox* file storage and Foursquare* using the OAuth* 2 protocol

    • Kinvey* data storage

    • A Google* Maps plugin built into App Designer

     

    Live Code Preview

    Save time with a simpler way to develop apps:

    • See code changes live in a browser or on a device

    • Test and edit your app live with App Preview

    • Developers see code changes immediately

     

    More App Stores

    Develop once, publish to multiple stores at the same time:

    • Package your app and submit it to more app stores

    • Deploy your app to multiple device form factors

    • New: the Firefox* app store

     

    To download the latest Intel XDK, go to http://xdk.intel.com

    You can follow the official Intel XDK Weibo account at http://weibo.com/xdktool for the latest news, or scan the QR code below to follow our WeChat account:

  • html5
  • Sviluppatori
  • Sviluppatori Intel AppUp®
  • Partner
  • Professori
  • Studenti
  • Android*
  • Apple iOS*
  • Apple OS X*
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Android*
  • HTML5
  • Internet degli oggetti
  • Esperienza utente
  • Windows*
  • HTML5
  • JavaScript*
  • Avanzato
  • Principiante
  • Intermedio
  • Intel® XDK
  • Strumenti di sviluppo
  • Esperienza utente e progettazione
  • Laptop
  • Tablet
  • Desktop
  • URL
  • Area tema: 

    IDZone

    Adding AdMob* by Google* to Your Cordova* Application

    $
    0
    0

    If you want to include AdMob* (by Google*) advertisements as part of your HTML5 hybrid mobile app, you will need to use a Cordova plugin. Unlike the desktop browser solution, mobile apps require a native code component to retrieve and display ads on mobile devices. Not all mobile ad services have this restriction, but if you want to serve AdMob advertisements in your app you will have to use a Cordova plugin.

    There are a variety of Cordova plugins available for serving ads; some serve ads from third-party sources, a few serve ads from the AdMob network. You are not required to use AdMob to serve ads, but only AdMob plugins are described in this article.

    At the time this article was written, there were three popular AdMob plugins available (in no particular order):

    Additional advertising plugins can be found by searching the Cordova Plugins Registry or PlugReg (an independent Cordova plugin registry), or simply by doing a general search of the web for "mobile ad services."

    Details regarding how to use the AdMob system as a means of monetization are available in the AdMob support pages.

    Before you can serve any AdMob ads you must sign up for an AdMob account at www.admob.com. There is no cost associated with creating an account, or for serving AdMob ads within your mobile app. If you already have an AdMob account, all you need to do to use the AdMob plugin is create the appropriate Ad Unit IDs that identify your ad impressions and provide them as part of the AdMob API initialization sequence within your app. A screenshot of the online AdMob tool you use to create the Ad Unit IDs is shown below.

    IMPORTANT: each application should have its own set of Ad Unit IDs! If you do not yet have an app in an app store, you can use the "manual" method to identify your app for the purpose of obtaining Ad Unit IDs.

    This very simple example, verified on Android and iOS, uses the "floatinghotpot" plugin, and can be easily cloned and opened as a project in the Intel XDK. It shows you the basic setup to get a banner ad running in your app. At the time this article was written, this plugin did not work with Crosswalk. The plugin described below has been made to work on Crosswalk builds.

    The "gooogle" AdMob plugin GitHub repo includes several examples that can help you learn how to include ads in your app. The simplest example is an index.html file located in the PhoneGap plugin repo; it is a single-file app. If you want to create an example based on this sample in the Intel XDK, follow these steps:

    1. Go to the Projects tab.
    2. Select "Start a New Project" at the lower left of the screen.
    3. Choose "Start with a Blank Project."
    4. Replace the default index.html file in your new project with the contents of the example referenced above.
    5. Plug your Ad Unit IDs (one for a "banner" ad and one for an "interstitial" ad) into the appropriate places in the sample code and save the index.html file.

    Finally, go to the Projects tab and use "Get Plugin from the Web" on the "Third-Party Plugins" panel. See the screenshot at the very end of this article and Adding Plugins to Your Intel® XDK Cordova App for more details about using plugins in your Cordova apps.

    NOTE: because your test application includes a third-party plugin, it will only run on a real device. You must use the Build tab to create an APK (for Android) or IPA (for iOS) to actually run your application. If you attempt to run this app using the Emulate, Test or Debug tabs the AdMob APIs will fail.

    Because this app uses a Cordova plugin, it can only be built using the Cordova build targets. Attempting to use the "legacy" build targets will not work.
