
How-To Tutorials - Application Development

357 Articles

Getting Started with ZeroMQ

Packt
04 Apr 2013
5 min read
(For more resources related to this topic, see here.)

The message queue

A message queue, or technically a FIFO (First In, First Out) queue, is a fundamental and well-studied data structure. There are different queue implementations, such as priority queues or double-ended queues, that have different features, but the general idea is that data is added to a queue and fetched when the data or the caller is ready.

Imagine we are using a basic in-memory queue. In case of an issue, such as a power outage or a hardware failure, the entire queue could be lost, and a program that expects to receive a message will never receive it. Adopting a message queue, however, guarantees that messages will be delivered to the destination no matter what happens. Message queuing enables asynchronous communication between loosely coupled components and also provides solid queuing consistency. If insufficient resources prevent you from immediately processing the data that is sent, you can queue it up in a message queue server, which stores the data until the destination is ready to accept the messages. Message queuing plays an important role in large-scale distributed systems and enables asynchronous communication.

Let's have a quick overview of the difference between synchronous and asynchronous systems. In ordinary synchronous systems, tasks are processed one at a time. A task is not processed until the task in progress is finished. This is the simplest way to get the job done.

(Figure: Synchronous system)

We could also implement this system with threads. In this case, threads process each task in parallel.

(Figure: Threaded synchronous system)

In the threading model, threads are managed by the operating system itself, on a single processor or on multiple processors/cores. Asynchronous Input/Output (AIO) allows a program to continue its execution while processing input/output requests. AIO is mandatory in real-time applications. By using AIO, we can map several tasks to a single thread.

(Figure: Asynchronous system)

The traditional way of programming is to start a process and wait for it to complete. The downside of this approach is that it blocks the execution of the program while there is a task in progress. AIO takes a different approach: a task that does not depend on the process can still continue.

You may wonder why you would use a message queue instead of handling all processes with a single-threaded or multi-threaded queue approach. Let's consider a scenario where you have a web application similar to Google Images in which you let users type some URLs. Once they submit the form, your application fetches all the images from the given URLs. However:

If you use a single-threaded queue, your application would not be able to process all the given URLs if there are too many users.
If you use a multi-threaded queue approach, your application would be vulnerable to a distributed denial of service (DDoS) attack.
You would lose all the given URLs in case of a hardware failure.

In this scenario, you know that you need to add the given URLs to a queue and process them later. So, you need a message queuing system.

Introduction to ZeroMQ

Until now we have covered what a message queue is, which brings us to the purpose of this article: ZeroMQ. The community describes ZeroMQ as "sockets on steroids". More formally, ZeroMQ is a messaging library that helps developers design distributed and concurrent applications.
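The original text does not include a code sample at this point, so the following is a minimal, hedged sketch of ZeroMQ's request/reply pattern using the JeroMQ Java binding; the socket types, the endpoint tcp://localhost:5555, and the single-process layout are illustrative assumptions rather than anything specified by the article.

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class HelloZeroMq {
        public static void main(String[] args) {
            // Server and client are shown in one process purely for illustration.
            try (ZContext context = new ZContext()) {
                // REP socket: waits for a request and sends back a reply.
                ZMQ.Socket server = context.createSocket(SocketType.REP);
                server.bind("tcp://*:5555");

                // REQ socket: sends a request and waits for the reply.
                ZMQ.Socket client = context.createSocket(SocketType.REQ);
                client.connect("tcp://localhost:5555");

                client.send("Hello");                 // request is queued and delivered
                String request = server.recvStr();    // server receives "Hello"
                server.send(request + " World");      // server replies
                System.out.println(client.recvStr()); // prints "Hello World"
            }
        }
    }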
The first thing we need to know about ZeroMQ is that it is not a traditional message queuing system, such as ActiveMQ, WebSphere MQ, or RabbitMQ. ZeroMQ is different: it gives us the tools to build our own message queuing system. It is a library. It runs on architectures from ARM to Itanium and has support for more than 20 programming languages.

Simplicity

ZeroMQ is simple. We can perform asynchronous I/O operations, and ZeroMQ queues the messages in an I/O thread. ZeroMQ I/O threads are asynchronous when handling network traffic, so ZeroMQ can do the rest of the job for us. If you have worked with sockets before, you will know that they are quite painful to work with. ZeroMQ, however, makes it easy to work with sockets.

Performance

ZeroMQ is fast. The website Second Life managed to get 13.4 microsecond end-to-end latencies and up to 4,100,000 messages per second. ZeroMQ can use the multicast transport protocol, which is an efficient method of transmitting data to multiple destinations.

The brokerless design

Unlike other traditional message queuing systems, ZeroMQ is brokerless. In traditional message queuing systems, there is a central message server (broker) in the middle of the network; every node is connected to this central node, and each node communicates with other nodes via the central broker, never directly. ZeroMQ, by contrast, is brokerless: applications can communicate directly with each other without any broker in the middle. ZeroMQ does not store messages on disk. Please do not even think about it. However, it is possible to use a local swap file to store messages if you set zmq.SWAP.

Summary

This article explained what a message queuing system is, discussed the importance of message queuing, and introduced ZeroMQ to the reader.

Resources for Article:

Further resources on this subject:

RESTful Web Service Implementation with RESTEasy [Article]
BizTalk Server: Standard Message Exchange Patterns and Types of Service [Article]
AJAX Chat Implementation: Part 1 [Article]


Working with WebStart and the Browser Plugin

Packt
06 Feb 2015
12 min read
In this article by Alex Kasko, Stanislav Kobylyanskiy, and Alexey Mironchenko, authors of the book OpenJDK Cookbook, we will cover the following topics:

Building the IcedTea browser plugin on Linux
Using the IcedTea Java WebStart implementation on Linux
Preparing the IcedTea Java WebStart implementation for Mac OS X
Preparing the IcedTea Java WebStart implementation for Windows

Introduction

For a long time, for end users, the Java applets technology was the face of the whole Java world. For a lot of non-developers, the word Java itself is a synonym for the Java browser plugin that allows running Java applets inside web browsers. The Java WebStart technology is similar to the Java browser plugin, but runs remotely loaded Java applications as separate applications outside of web browsers.

The OpenJDK open source project does not contain implementations of either the browser plugin or the WebStart technologies. The Oracle Java distribution, which otherwise matches the OpenJDK codebase closely, provides its own closed source implementations of these technologies. The IcedTea-Web project contains free and open source implementations of the browser plugin and WebStart technologies. The IcedTea-Web browser plugin supports only GNU/Linux operating systems, while the WebStart implementation is cross-platform.

While the IcedTea implementation of WebStart is well-tested and production-ready, it has numerous incompatibilities with the Oracle WebStart implementation. These differences can be seen as corner cases; some of them are:

Different behavior when parsing not-well-formed JNLP descriptor files: the Oracle implementation is generally more lenient towards malformed descriptors.
Differences in JAR (re)downloading and caching behavior: the Oracle implementation uses caching more aggressively.
Differences in sound support: this is due to differences in sound support between Oracle Java and IcedTea on Linux. Linux historically has multiple different sound providers (ALSA, PulseAudio, and so on) and IcedTea has wider support for different providers, which can lead to sound misconfiguration.

The IcedTea-Web browser plugin (as it is built on WebStart) has these incompatibilities too. On top of them, it can have more incompatibilities related to browser integration. User interface forms and general browser-related operations, such as access from/to JavaScript code, should work fine with both implementations. But historically, the browser plugin was widely used for security-critical applications such as online bank clients. Such applications usually require security facilities from browsers, such as access to certificate stores or hardware crypto-devices, which can differ from browser to browser depending on the OS (for example, a facility may be supported only on Windows), the browser version, the Java version, and so on. Because of that, many real-world applications can have problems running in the IcedTea-Web browser plugin on Linux.

Both WebStart and the browser plugin are built on the idea of downloading (possibly untrusted) code from remote locations, and proper privilege checking and sandboxed execution of that code is a notoriously complex task. Security issues reported in the Oracle browser plugin (the most widely known are the issues from the year 2012) are also fixed separately in IcedTea-Web.

Building the IcedTea browser plugin on Linux

The IcedTea-Web project is not inherently cross-platform; it is developed on Linux and for Linux, and so it can be built quite easily on popular Linux distributions.
The two main parts of it (stored in corresponding directories in the source code repository) are netx and plugin. NetX is a pure Java implementation of the WebStart technology; we will look at it more thoroughly in the following recipes of this article. Plugin is an implementation of the browser plugin using the NPAPI plugin architecture that is supported by multiple browsers. Plugin is written partly in Java and partly in native code (C++), and it officially supports only Linux-based operating systems.

There exists an opinion that the NPAPI architecture is dated, overcomplicated, and insecure, and that modern web browsers have enough built-in capabilities to not require external plugins; browsers have gradually reduced support for NPAPI. Despite that, at the time of writing this book, the IcedTea-Web browser plugin worked on all major Linux browsers (Firefox and derivatives, Chromium and derivatives, and Konqueror).

We will build the IcedTea-Web browser plugin from sources using Ubuntu 12.04 LTS amd64.

Getting ready

For this recipe, we will need a clean Ubuntu 12.04 running with the Firefox web browser installed.

How to do it...

The following procedure will help you to build the IcedTea-Web browser plugin:

Install prepackaged binaries of OpenJDK 7:

    sudo apt-get install openjdk-7-jdk

Install the GCC toolchain and build dependencies:

    sudo apt-get build-dep openjdk-7

Install the specific dependency for the browser plugin:

    sudo apt-get install firefox-dev

Download and decompress the IcedTea-Web source code tarball:

    wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz
    tar xzvf icedtea-web-1.4.2.tar.gz

Run the configure script to set up the build environment:

    ./configure

Run the build process:

    make

Install the newly built plugin into the /usr/local directory:

    sudo make install

Configure the Firefox web browser to use the newly built plugin library:

    mkdir ~/.mozilla/plugins
    cd ~/.mozilla/plugins
    ln -s /usr/local/IcedTeaPlugin.so libjavaplugin.so

Check whether the IcedTea-Web plugin has appeared under Tools | Add-ons | Plugins.

Open the http://java.com/en/download/installed.jsp web page to verify that the browser plugin works.

How it works...

The IcedTea browser plugin requires the IcedTea Java implementation to compile successfully. The prepackaged OpenJDK 7 binaries in Ubuntu 12.04 are based on IcedTea, so we installed them first. The plugin uses the GNU Autoconf build system, which is common among free software tools. The xulrunner-dev package is required to access the NPAPI headers. The built plugin may be installed into Firefox for the current user only, without requiring administrator privileges. For that, we created a symbolic link to our plugin in the place where Firefox expects to find the libjavaplugin.so plugin library.

There's more...

The plugin can also be installed into other browsers with NPAPI support, but installation instructions can differ between browsers and Linux distributions. As the NPAPI architecture does not depend on the operating system, in theory a plugin can be built for non-Linux operating systems, but currently no such ports are planned.

Using the IcedTea Java WebStart implementation on Linux

On the Java platform, the JVM needs to perform the class load process for each class it wants to use. This process is opaque for the JVM, and the actual bytecode for loaded classes may come from one of many sources.
For example, this mechanism allows the Java applet classes to be loaded from a remote server into the Java process inside the web browser. Remote class loading may also be used to run remotely loaded Java applications in standalone mode, without integration with the web browser. This technique is called Java WebStart and was developed under Java Specification Request (JSR) number 56.

To run a Java application remotely, WebStart requires an application descriptor file written using the Java Network Launching Protocol (JNLP) syntax. This file is used to define the remote server to load the application from, along with some metainformation. The WebStart application may be launched from a web page by clicking on the JNLP link, or without the web browser using a JNLP file obtained beforehand. In either case, the running application is completely separate from the web browser, but uses a sandboxed security model similar to Java applets.

The OpenJDK project does not contain a WebStart implementation; the Oracle Java distribution provides its own closed-source WebStart implementation. An open source WebStart implementation exists as part of the IcedTea-Web project. It was initially based on the NETwork eXecute (NetX) project. Contrary to the applet technology, WebStart does not require any web browser integration. This allowed developers to implement the NetX module in pure Java, without native code.

For integration with Linux-based operating systems, IcedTea-Web implements the javaws command as a shell script that launches the netx.jar file with the proper arguments. In this recipe, we will build the NetX module from the official IcedTea-Web source tarball.

Getting ready

For this recipe, we will need a clean Ubuntu 12.04 running with the Firefox web browser installed.

How to do it...

The following procedure will help you to build the NetX module:

Install prepackaged binaries of OpenJDK 7:

    sudo apt-get install openjdk-7-jdk

Install the GCC toolchain and build dependencies:

    sudo apt-get build-dep openjdk-7

Download and decompress the IcedTea-Web source code tarball:

    wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz
    tar xzvf icedtea-web-1.4.2.tar.gz

Run the configure script to set up a build environment, excluding the browser plugin from the build:

    ./configure --disable-plugin

Run the build process:

    make

Install the newly built files into the /usr/local directory:

    sudo make install

Run the WebStart application example from the Java tutorial:

    javaws http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp

How it works...

The javaws shell script is installed into the /usr/local/* directory. When launched with a path or a link to a JNLP file, javaws launches the netx.jar file, adding it to the boot classpath (for security reasons) and providing the JNLP link as an argument.

Preparing the IcedTea Java WebStart implementation for Mac OS X

The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Mac OS X. IcedTea-Web provides the javaws launcher implementation only for Linux-based operating systems. In this recipe, we will create a simple implementation of the WebStart launcher script for Mac OS X.

Getting ready

For this recipe, we will need Mac OS X Lion with Java 7 (the prebuilt OpenJDK or the Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using the instructions from the previous recipe.
How to do it...

The following procedure will help you to run WebStart applications on Mac OS X:

Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp.

Test that this application can be run from the terminal using netx.jar:

    java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp

Create the wslauncher.sh bash script with the following contents:

    #!/bin/bash
    if [ "x$JAVA_HOME" = "x" ] ; then
        JAVA="$( which java 2>/dev/null )"
    else
        JAVA="$JAVA_HOME"/bin/java
    fi
    if [ "x$JAVA" = "x" ] ; then
        echo "Java executable not found"
        exit 1
    fi
    if [ "x$1" = "x" ] ; then
        echo "Please provide JNLP file as first argument"
        exit 1
    fi
    $JAVA -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot $1

Mark the launcher script as executable:

    chmod 755 wslauncher.sh

Run the application using the launcher script:

    ./wslauncher.sh dynamictree_webstart.jnlp

How it works...

The netx.jar file contains a Java application that can read JNLP files and download and run the classes described in the JNLP. But for security reasons, netx.jar cannot be launched directly as an application (using the java -jar netx.jar syntax). Instead, netx.jar is added to the privileged boot classpath and is run by specifying the main class directly. This allows us to run downloaded applications in sandbox mode. The wslauncher.sh script tries to find the Java executable using the PATH and JAVA_HOME environment variables and then launches the specified JNLP through netx.jar.

There's more...

The wslauncher.sh script provides a basic solution for running WebStart applications from the terminal. To integrate netx.jar into your operating system environment properly (to be able to launch WebStart applications using JNLP links from the web browser), a native launcher or a custom platform scripting solution may be used. Such solutions are beyond the scope of this book.

Preparing the IcedTea Java WebStart implementation for Windows

The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Windows; we also used it on Linux and Mac OS X in the previous recipes in this article. In this recipe, we will create a simple implementation of the WebStart launcher script for Windows.

Getting ready

For this recipe, we will need a version of Windows running with Java 7 (the prebuilt OpenJDK or the Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using the instructions from the previous recipe in this article.

How to do it...

The following procedure will help you to run WebStart applications on Windows:

Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp.
Test that this application can be run from the terminal using netx.jar:

    java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp

Create the wslauncher.sh bash script with the following contents:

    #!/bin/bash
    if [ "x$JAVA_HOME" = "x" ] ; then
        JAVA="$( which java 2>/dev/null )"
    else
        JAVA="$JAVA_HOME"/bin/java
    fi
    if [ "x$JAVA" = "x" ] ; then
        echo "Java executable not found"
        exit 1
    fi
    if [ "x$1" = "x" ] ; then
        echo "Please provide JNLP file as first argument"
        exit 1
    fi
    $JAVA -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot $1

Mark the launcher script as executable:

    chmod 755 wslauncher.sh

Run the application using the launcher script:

    ./wslauncher.sh dynamictree_webstart.jnlp

How it works...

The netx.jar module must be added to the boot classpath, as it cannot be run directly for security reasons. The wslauncher.bat script tries to find the Java executable using the JAVA_HOME environment variable and then launches the specified JNLP through netx.jar.

There's more...

The wslauncher.bat script may be registered as the default application for running JNLP files. This will allow you to run WebStart applications from the web browser. But the current script will show the batch window for a short period of time before launching the application. It also does not support looking for Java executables in the Windows Registry. A more advanced script without those problems may be written using Visual Basic script (or any other native scripting solution) or as a native executable launcher. Such solutions are beyond the scope of this book.

Summary

In this article we covered the configuration and installation of the WebStart and browser plugin components, which are the biggest parts of the IcedTea-Web project.
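The recipes above implement the launcher as shell and batch scripts, and the text notes that a more advanced launcher could be built in another way. As a closing, hedged illustration that is not from the book, the same launcher logic (resolve the Java executable from JAVA_HOME, then start netx.jar on the boot classpath with the JNLP path) can also be written as a small cross-platform Java program.

    import java.nio.file.Paths;

    public class WsLauncher {
        public static void main(String[] args) throws Exception {
            if (args.length < 1) {
                System.err.println("Please provide a JNLP file as the first argument");
                System.exit(1);
            }
            // Resolve the java executable from JAVA_HOME if set, otherwise rely on PATH.
            String javaHome = System.getenv("JAVA_HOME");
            String javaCmd = (javaHome == null || javaHome.isEmpty())
                    ? "java"
                    : Paths.get(javaHome, "bin", "java").toString();

            // Same invocation as the scripts above: netx.jar on the boot classpath,
            // main class specified explicitly, JNLP path passed through.
            ProcessBuilder pb = new ProcessBuilder(
                    javaCmd, "-Xbootclasspath/a:netx.jar",
                    "net.sourceforge.jnlp.runtime.Boot", args[0]);
            pb.inheritIO();
            System.exit(pb.start().waitFor());
        }
    }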


Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge.

In his opening keynote, Microsoft CEO Satya Nadella outlined the company's vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming. "As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in," said Satya Nadella, CEO, Microsoft. "Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone." https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

Microsoft Graph is now powered with data connectivity, a service that combines analytics data from the Microsoft Graph with customers' business data. Microsoft Graph data connect will provide Office 365 data and Microsoft Azure resources to users via a toolset. The migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search works as a unified search experience across all Microsoft apps: Office, Outlook, SharePoint, OneDrive, Bing, and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches. Other features included in Microsoft Search are:

Search box displacement
Zero query typing and key-phrase suggestion feature
Query history feature, and personal search query history
Administrator access to the history of popular searches for their organizations, but not to search history for individual users
Files/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on MS Search here.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software developer kit and the first experiences powered by the Fluid Framework later this year on Microsoft Word, Teams, and Outlook. Read more about the Fluid Framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features for Microsoft's flagship web browser, Microsoft Edge. New features include:

Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab. This allows businesses to run legacy Internet Explorer-based apps in a modern browser.
Privacy Tools: Additional privacy controls that allow customers to choose from three levels of privacy in Microsoft Edge: Unrestricted, Balanced, and Strict. These options limit how third parties can track users across the web. "Unrestricted" allows all third-party trackers to work in the browser. "Balanced" prevents third-party trackers from sites the user has not visited before. And "Strict" blocks all third-party trackers.

Collections: Collections allows users to collect, organize, share, and export content more efficiently and with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium. This will make Edge easier for third parties to develop for. For more details, visit Microsoft's developer blog.

New toolkit enhancements in the Microsoft 365 platform

Windows Terminal

Windows Terminal is Microsoft's new application for Windows command-line users. Top features include:

A user interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering
Multiple tab support and theming and customization features
A powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL), and all forms of command-line application

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the "React Native for Windows" implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices, and more. The project is being developed on GitHub and is available for developers to test. More mature releases will follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at MSBuild 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file system performance increases (twice the speed for file-system-heavy operations, such as Node Package Manager install). WSL 2 also supports running Linux Docker containers. The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple Developer Tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform and will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

One Base Class Library containing APIs for building any type of application
More choice on runtime experiences
Java interoperability available on all platforms
Objective-C and Swift interoperability supported on multiple operating systems

.NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios. .NET 5 will also offer one unified toolchain supported by new SDK project types as well as a flexible deployment model (side-by-side and self-contained EXEs). Detailed information here.

ML.NET 1.0

ML.NET is Microsoft's open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible for .NET developers. Its new version, ML.NET 1.0, was released at the Microsoft Build Conference 2019 yesterday.
Some new features in this release are:

Automated Machine Learning Preview: Transforms input data and selects the best performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports Regression and Classification ML tasks.
ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best performing model.
ML.NET CLI Preview: ML.NET CLI is a dotnet tool which generates ML.NET models using AutoML and ML.NET. The ML.NET CLI quickly iterates through a dataset for a specific ML task and produces the best model.

Visual Studio IntelliCode, Microsoft's tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft's AI-assisted coding tool, is now generally available. It is essentially an enhanced IntelliSense, Microsoft's extremely popular code completion tool. IntelliCode is trained on the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings the Time Travel Debugging feature introduced with version 16.0 out of preview, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and Mixed Reality

Minecraft AR game for mobile devices

At the end of Microsoft's Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft. https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits, and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent Edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution:

Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
Multiple dashboards and data visualization options for different types of users
Inbound and outbound data connectors, so that operators can integrate with other systems
The ability to add custom branding and operator resources to an IoT Central application with new white labeling options

The new Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language to connect IoT devices to the cloud seamlessly, without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play enabled devices in Microsoft's Azure IoT Device Catalog.
The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API which provides real-time public transit information, including nearby stops, routes, and trip intelligence. This API will also provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-sourced project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment, in any public/private cloud or on-premises, such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers to respond to events happening in other services or components. This allows the container to consume events directly from the source, instead of routing them through HTTP. KEDA also presents a new hosting option for Azure Functions, which can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK), released as an extension of Microsoft's Defending Democracy Program, to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its 6th year and will continue till 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!

Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces a collaboration with Microsoft's .NET at DockerCon 2019
How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]

Introduction to Spring Framework

Packt
30 Dec 2016
10 min read
In this article by Tejaswini Mandar Jog, author of the book Learning Spring 5.0, we will cover the following topics:

Introduction to the Spring framework
Problems addressed by Spring in enterprise application development
Spring architecture
What's new in Spring 5.0
Container

Spring, the fresh new start after the winter of traditional J2EE, is what the Spring framework really is: a complete solution to most of the problems that occur when handling the development of numerous complex modules collaborating with each other in a Java enterprise application. Spring is not a replacement for traditional Java development, but it is a reliable solution that helps companies withstand today's competitive and fast-growing market without forcing developers to be tightly coupled to Spring APIs.

Problems addressed by Spring

The Java platform is a long-term, complex, scalable, aggressive, and rapidly developing platform. Application development takes place on a particular version, and applications need to keep upgrading to the latest version in order to maintain recent standards and cope with them. These applications have numerous classes which interact with each other and reuse APIs to take full advantage of them, so as to keep the application running smoothly. But this leads to some very common problems, described next.

Scalability

The growth and development of each of the technologies in the market is pretty fast, both in hardware and in software. An application developed a couple of years back may become outdated because of this growth. The market is so demanding that developers need to keep changing the application on a frequent basis. That means whatever application we develop today should be capable of handling upcoming demands and growth without affecting the working application. Scalability is an application's ability to handle, or support the handling of, an increased workload so that it adapts to the growing environment instead of being replaced. An application that supports increased website traffic as the number of users grows is a very simple example of a scalable application. As the code is tightly coupled, making it scalable becomes a problem.

Plumbing code

Let's take the example of configuring a DataSource in the Tomcat environment. Now the developers want to use this configured DataSource in the application. What will we do? Yes, we will do a JNDI lookup to get the DataSource. In order to handle JDBC we will acquire and then release the resources in a try-catch block. Code such as the try-catch just discussed, inter-computer communication, and collections is necessary but not application specific; this is plumbing code. Plumbing code increases the length of the code and makes debugging complex.

Boilerplate code

How do we get the connection while doing JDBC? We need to register the Driver class and invoke the getConnection() method on DriverManager to obtain the connection object. Is there any alternative to these steps? Actually, no! Whenever and wherever we have to do JDBC, these same steps have to be repeated every time. This kind of repetitive code, blocks of code which developers write in many places with little or no modification to achieve some task, is called boilerplate code. Boilerplate code makes Java development unnecessarily lengthy and complex.

Unavoidable non-functional code

Whenever application development happens, the developer concentrates on the business logic, the look and feel, and the persistency to be achieved.
But along with these things, the developers also have to think rigorously about how to manage transactions, how to handle increasing load on the site, how to make the application secure, and much more. If we take a close look, these things are not core concerns of the application, but they are still unavoidable. Code which does not handle the business logic (functional) requirements but is important for maintenance, troubleshooting, and managing the security of an application is called non-functional code. In most Java applications, developers have to write non-functional code quite frequently alongside the core concerns. This draws concentration away from business logic development.

Unit testing of the application

Let's take an example. We want to test code which saves data to a table in a database. Here, testing the database is not our motive; we just want to be sure whether the code we have written works fine or not. An enterprise Java application consists of many classes which are interdependent. As dependencies exist between the objects, it becomes difficult to carry out testing.

POJO-based development

The class is the most basic structure of application development. If a class extends or implements an interface of a framework, reusing it becomes difficult, as it is tightly coupled with that API. The Plain Old Java Object (POJO) is a very famous and regularly used term in Java application development. Unlike Struts and EJB, Spring doesn't force developers to write code that imports or extends Spring APIs. The best thing about Spring is that developers can write code that generally doesn't have any dependencies on the framework, and for this, POJOs are the favorite choice. POJOs support loosely coupled modules which are reusable and easy to test. The Spring framework is said to be non-invasive, as it doesn't force the developer to use its API classes or interfaces and allows developing loosely coupled applications.

Loose coupling through DI

Coupling is the degree of knowledge one class has about another. When a class is less dependent on the design of any other class, that class is called loosely coupled. Loose coupling is best achieved by programming to interfaces. In the Spring framework, we can keep the dependencies of a class separated from the code, in a separate configuration file. Using interfaces and the dependency injection techniques provided by Spring, developers can write loosely coupled code (don't worry, very soon we will discuss Dependency Injection and how to achieve it). With the help of loose coupling, one can write code that accommodates frequent changes caused by changes in its dependencies. This makes the application more flexible and maintainable.

Declarative programming

In declarative programming, the code states what it is going to perform but not how it will be performed. This is the opposite of imperative programming, where we need to state step by step what to execute. Declarative programming can be achieved using XML and annotations. The Spring framework keeps all configuration in XML, from where it can be used by the framework to maintain the lifecycle of a bean. As the Spring framework developed, versions from 2.0 onward gave an alternative to XML configuration with a wide range of annotations.

Boilerplate code reduction using aspects and templates

We discussed a couple of pages back that repetitive code is boilerplate code.
Boilerplate code is essential; without it, providing transactions, security, logging, and so on would become difficult. The framework offers a solution: writing aspects that deal with such cross-cutting concerns, so there is no need to write them along with the business logic code. The use of aspects helps reduce boilerplate code while the developers can still achieve the same end effect. One more thing the framework provides is templates for different requirements. JdbcTemplate and HibernateTemplate are a couple of useful facilities provided by Spring which reduce boilerplate code. But as a matter of fact, you need to wait a little to understand and discover their actual potential.

Layered architecture

Unlike Struts and Hibernate, which provide web and persistency solutions respectively, Spring has a wide range of modules for numerous enterprise development problems. This layered architecture helps the developer choose any one or more of the modules to write a solution for their application in a coherent way. For example, one can choose the Web MVC module to handle web requests efficiently without even knowing that there are many other modules available in the framework.

Spring architecture

Spring provides more than 20 different modules which can be broadly summarized under 7 main modules, as follows:

(Figure: Spring modules)

What more does Spring support underneath? The following sections cover the additional features of Spring.

Security module

Nowadays, applications, along with basic functionality, also need to provide sound ways to handle security at different levels. Spring 5 supports a declarative security mechanism using Spring AOP.

Batch module

Java enterprise applications need to perform bulk processing, handling large amounts of data, in many business solutions without user interaction. Handling such things in batches is the best solution available. Spring provides integration of batch processing to develop robust applications.

Spring integration

In the development of an enterprise application, the application may need to interact with other enterprise applications. Spring Integration is an extension of the core Spring framework that provides integration with other enterprise applications with the help of declarative adapters. Messaging is one such integration which is extensively supported by Spring.

Mobile module

The extensive use of mobile devices opens new doors in development. This module is an extension of Spring MVC which helps in developing mobile web applications, known as the Spring Android Project. It also provides detection of the type of device which is making the request and renders the views accordingly.

LDAP module

The basic aim of Spring was to simplify development and reduce boilerplate code. The Spring LDAP module supports easy LDAP integration using template-based development.

.NET module

This module has been introduced to support the .NET platform. Modules like ADO.NET, NHibernate, and ASP.NET have been included in the .NET module to simplify .NET development while taking advantage of features such as DI, AOP, and loose coupling.

Container – the heart of Spring

POJO development is the backbone of the Spring framework. A POJO configured in the configuration file, whose object instantiation, object assembly, and object management is done by the Spring IoC container, is called a bean or Spring bean. We use the name Spring IoC because the container is based on the pattern of Inversion of Control.

Inversion of Control (IoC)

In every Java application, the first important thing each developer does is obtain an object which he can use in the application.
The state of an object can be obtained at runtime or at compile time. But developers create objects using boilerplate code in a number of places. When the same developer uses Spring, instead of creating the object himself, he depends on the framework to obtain the object from. The term Inversion of Control is used because the Spring container inverts the responsibility of object creation away from the developer. Spring IoC container is just a terminology; the Spring framework provides two containers:

The BeanFactory
The ApplicationContext

A minimal, illustrative container usage sketch follows at the end of this article.

Summary

So in this article, we discussed the general problems faced in Java enterprise application development and how they have been addressed by the Spring framework. We have seen the major changes in each version of Spring since its first introduction to the market.

Further resources on this subject:

Enabling Spring Faces support
Design with Spring AOP
Getting Started with Spring Security
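The article describes the container only in prose, so here is the minimal, hedged usage sketch referred to above, written with Spring's Java-based configuration. The names GreetingService, GreetingServiceImpl, AppConfig, and ContainerDemo are invented for illustration and are not from the article; XML configuration, which the article mentions, would work equally well.

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    // A POJO with no Spring dependencies: the kind of class Spring manages as a bean.
    interface GreetingService {
        String greet(String name);
    }

    class GreetingServiceImpl implements GreetingService {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // Java-based configuration: tells the container which beans to create and wire.
    @Configuration
    class AppConfig {
        @Bean
        public GreetingService greetingService() {
            return new GreetingServiceImpl();
        }
    }

    public class ContainerDemo {
        public static void main(String[] args) {
            // The container (ApplicationContext) creates and assembles the beans;
            // the caller only asks for them instead of instantiating them directly.
            ApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class);
            GreetingService service = context.getBean(GreetingService.class);
            System.out.println(service.greet("Spring"));
        }
    }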


Java in Oracle Database

Packt
07 Jul 2010
5 min read
"The views expressed in this article are the author's own and do not necessarily reflect the views of Oracle." (For more resources on Oracle, see here.) Introduction This article is better understood by people who have some familiarity with Oracle database, SQL, PL/SQL, and of course Java (including JDBC). Beginners can also understand the article to some extent, because it does not contain many specifics/details. The article can be useful to software developers, designers and architects working with Java. Oracle database provides a Java runtime in its database server process. Because of this, it is possible not only to store Java sources and Java classes in an Oracle database, but also to run the Java classes within the database server. Such Java classes will be 'executed' by the Java Virtual Machine embedded in the database server. The Java platform provided is J2SE-compliant, and in addition to the JVM, it includes all the Java system classes. So, conceptually, whatever Java code that can be run using the JREs (like Sun's JRE) on the operating system, can be run within the Oracle database too. Java stored procedure The key unit of the Java support inside the Oracle database is the 'Java Stored Procedure' (that may be referred to as JSP, as long as it is not confused with JavaServer Pages). A Java stored procedure is an executable unit stored inside the Oracle database, and whose implementation is in Java. It is similar to PL/SQL stored procedures and functions. Creation Let us see an example of how to create a simple Java stored procedure. We will create a Java stored procedure that adds two given numbers and returns the sum. The first step is to create a Java class that looks like the following: public class Math{ public static int add(int x, int y) { return x + y; }} This is a very simple Java class that just contains one static method that returns the sum of two given numbers. Let us put this code in a file called Math.java, and compile it (say, by doing 'javac Math.java') to get Math.class file. The next step is to 'load' Math.class into the Oracle database. That is, we have to put the class file located in some directory into the database, so that the class file gets stored in the database. There are a few ways to do this, and one of them is to use the command-line tool called loadjava provided by Oracle, as follows: loadjava -v -u scott/tiger Math.class Generally, in Oracle database, things are always stored in some 'schema' (also known as 'user'). Java classes are no exception. So, while loading a Java class file into the database, we need to specify the schema where the Java class should be stored. Here, we have given 'scott' (along with the password). There are a lot of other things that can be done using loadjava, but we will not go into them here. Next, we have to create a 'PL/SQL wrapper' as follows: SQL> connect scott/tigerConnected.SQL>SQL> create or replace function addition(a IN number, b IN number) return number 2 as language java name 'Math.add(int, int) return int'; 3 /Function created.SQL> We have created the PL/SQL wrapper called 'addition', for the Java method Math.add(). The syntax is same as the one used to create a PL/SQL function/procedure, but here we have specified that the implementation of the function is in the Java method Math.add(). And that's it. We've created a Java stored procedure! Basically, what we have done is, implemented our requirement in Java, and then exposed the Java implementation via PL/SQL. 
Using JDeveloper, an IDE from Oracle, all these steps (creating the Java source, compiling it, loading it into the database, and creating the PL/SQL wrapper) can be done easily from within the IDE.

One thing to remember is that we can create Java stored procedures for Java static methods only, not for instance methods. This is not a big disadvantage and in fact makes sense, because even the main() method, which is the entry point for a Java program, is also static. Here, since Math.add() is the entry point, it has to be static. So, we can write as many static methods in our Java code as needed and make them entry points by creating PL/SQL wrappers for them.

Invocation

We can call the Java stored procedure we have just created just like any PL/SQL procedure/function is called, either from SQL or from PL/SQL:

    SQL> select addition(10, 20) from dual;

    ADDITION(10,20)
    ---------------
                 30

    SQL> declare
      2    s number;
      3  begin
      4    s := addition(10, 20);
      5    dbms_output.put_line('SUM = ' || s);
      6  end;
      7  /
    SUM = 30

    PL/SQL procedure successfully completed.

    SQL>

Here, the select query, as well as the PL/SQL block, invoked the PL/SQL function addition(), which in turn invoked the underlying Java method Math.add(). A main feature of Java stored procedures is that the caller (like the select query above) has no idea that the procedure is actually implemented in Java. Thus, stored procedures implemented in PL/SQL and Java can be called alike, without the caller needing to know the language of the underlying implementation. So, in general, whatever Java code we have can be seamlessly integrated into PL/SQL code via PL/SQL wrappers. Put another way, we now have more than one language option for implementing a stored procedure: PL/SQL and Java. If we have a project where stored procedures are to be implemented, then Java is a good option, because today it is relatively easy to find a Java programmer.
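The article demonstrates invocation only from SQL and PL/SQL. As a complementary, hedged sketch that is not part of the original text, the same addition function can also be called from a Java client over JDBC using a CallableStatement; the connection URL, service name, and credentials below are placeholders to adjust for your environment.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class CallAddition {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details: adjust host, port, service name, and credentials.
            String url = "jdbc:oracle:thin:@//localhost:1521/ORCLPDB1";
            try (Connection conn = DriverManager.getConnection(url, "scott", "tiger");
                 // The PL/SQL wrapper 'addition' is called like any stored function.
                 CallableStatement cs = conn.prepareCall("{ ? = call addition(?, ?) }")) {
                cs.registerOutParameter(1, Types.INTEGER); // the function's return value
                cs.setInt(2, 10);
                cs.setInt(3, 20);
                cs.execute();
                System.out.println("SUM = " + cs.getInt(1)); // prints SUM = 30
            }
        }
    }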


Applying LINQ to Entities to a WCF Service

Packt
05 Feb 2013
15 min read
(For more resources related to this topic, see here.)

Creating the LINQNorthwind solution

The first thing we need to do is create a test solution. In this article, we will start from the data access layer. Perform the following steps:

Start Visual Studio.
Create a new class library project LINQNorthwindDAL with the solution name LINQNorthwind (make sure Create directory for solution is checked so you can specify the solution name).
Delete the Class1.cs file.
Add a new class ProductDAO to the project.
Change the new class ProductDAO to be public.

Now you should have a new solution with the empty data access layer class. Next, we will add a model to this layer and create the business logic layer and the service interface layer.

Modeling the Northwind database

In the previous section, we created the LINQNorthwind solution. Next, we will apply LINQ to Entities to this new solution. For the data access layer, we will use LINQ to Entities instead of the raw ADO.NET data adapters. As you will see in the next section, we will use one LINQ statement to retrieve product information from the database, and the update LINQ statements will handle the concurrency control for us easily and reliably.

As you may recall, to use LINQ to Entities in the data access layer of our WCF service, we first need to add an entity data model to the project:

In the Solution Explorer, right-click on the project item LINQNorthwindDAL, select the menu options Add | New Item..., then choose Visual C# Items | ADO.NET Entity Data Model as the template and enter Northwind.edmx as the name.
Select Generate from database, choose the existing Northwind connection, and add the Products table to the model.
Click on the Finish button to add the model to the project.

The new column RowVersion should be in the Product entity. If it is not there, add it to the database table with a type of Timestamp and refresh the entity data model from the database. In the EDM designer, select the RowVersion property of the Product entity and change its Concurrency Mode from None to Fixed. Note that its StoreGeneratedPattern should remain as Computed.

This will generate a file called Northwind.Context.cs, which contains the DbContext for the Northwind database. Another file called Product.cs is also generated, which contains the Product entity class. You need to save the data model in order to see these two files in the Solution Explorer. In the Visual Studio Solution Explorer, the Northwind.Context.cs file is under the template file Northwind.Context.tt and Product.cs is under Northwind.tt. However, in Windows Explorer, they are two separate files from the template files.

Creating the business domain object project

In Implementing a WCF Service in the Real World, we created a business domain object (BDO) project to hold the intermediate data between the data access objects and the service interface objects. In this section, we will also add such a project to the solution for the same purpose:

In the Solution Explorer, right-click on the LINQNorthwind solution. Select Add | New Project... to add a new class library project named LINQNorthwindBDO.
Delete the Class1.cs file.
Add a new class file ProductBDO.cs.
Change the new class ProductBDO to be public.
Add the following properties to this class:

ProductID
ProductName
QuantityPerUnit
UnitPrice
Discontinued
UnitsInStock
UnitsOnOrder
ReorderLevel
RowVersion

The following is the code listing of the ProductBDO class:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;

    namespace LINQNorthwindBDO
    {
        public class ProductBDO
        {
            public int ProductID { get; set; }
            public string ProductName { get; set; }
            public string QuantityPerUnit { get; set; }
            public decimal UnitPrice { get; set; }
            public int UnitsInStock { get; set; }
            public int ReorderLevel { get; set; }
            public int UnitsOnOrder { get; set; }
            public bool Discontinued { get; set; }
            public byte[] RowVersion { get; set; }
        }
    }

As noted earlier, in this article we will use BDOs to hold the intermediate data between the data access objects and the data contract objects. Besides this approach, there are some other ways to pass data back and forth between the data access layer and the service interface layer; two of them are listed as follows:

The first one is to expose the Entity Framework context objects from the data access layer up to the service interface layer. In this way, both the service interface layer and the business logic layer (we will implement them soon in the following sections) can interact directly with the Entity Framework. This approach is not recommended, as it goes against the best practice of service layering.
Another approach is to use self-tracking entities. Self-tracking entities are entities that know how to do their own change tracking regardless of which tier those changes are made on. You can expose self-tracking entities from the data access layer to the business logic layer, then to the service interface layer, and even share the entities with the clients. Because self-tracking entities are independent of the entity context, you don't need to expose the entity context objects. The problem with this approach is that you have to share the binary files with all the clients, so it is the least interoperable approach for a WCF service. This approach is no longer recommended by Microsoft, so in this book we will not discuss it.

Using LINQ to Entities in the data access layer

Next we will modify the data access layer to use LINQ to Entities to retrieve and update products. We will first create GetProduct to retrieve a product from the database and then create UpdateProduct to update a product in the database.

Adding a reference to the BDO project

Now that we have the BDO project in the solution, we need to modify the data access layer project to reference it:

In the Solution Explorer, right-click on the LINQNorthwindDAL project.
Select Add Reference....
Select the LINQNorthwindBDO project from the Projects tab under Solution.
Click on the OK button to add the reference to the project.

Creating GetProduct in the data access layer

We can now create the GetProduct method in the data access layer class ProductDAO, to use LINQ to Entities to retrieve a product from the database. We will first create an entity DbContext object and then use LINQ to Entities to get the product from the DbContext object. The product we get from the DbContext will be a conceptual entity model object. However, we don't want to pass this product object back to the upper-level layer, because we don't want to tightly couple the business logic layer with the data access layer.
Therefore, we will convert this entity model product object to a ProductBDO object and then pass this ProductBDO object back to the upper-level layers. To create the new method, first add the following using statement to the ProductDAO class:

using LINQNorthwindBDO;

Then add the following method to the ProductDAO class:

public ProductBDO GetProduct(int id)
{
    ProductBDO productBDO = null;
    using (var NWEntities = new NorthwindEntities())
    {
        Product product = (from p in NWEntities.Products
                           where p.ProductID == id
                           select p).FirstOrDefault();
        if (product != null)
            productBDO = new ProductBDO()
            {
                ProductID = product.ProductID,
                ProductName = product.ProductName,
                QuantityPerUnit = product.QuantityPerUnit,
                UnitPrice = (decimal)product.UnitPrice,
                UnitsInStock = (int)product.UnitsInStock,
                ReorderLevel = (int)product.ReorderLevel,
                UnitsOnOrder = (int)product.UnitsOnOrder,
                Discontinued = product.Discontinued,
                RowVersion = product.RowVersion
            };
    }
    return productBDO;
}

With the raw ADO.NET approach, within the GetProduct method we had to create an ADO.NET connection, create an ADO.NET command object with that connection, specify the command text, connect to the Northwind database, and send the SQL statement to the database for execution. After the result was returned from the database, we had to loop through the DataReader and cast the columns to our entity object one by one. With LINQ to Entities, we only construct one LINQ to Entities statement and everything else is handled by LINQ to Entities. Not only do we need to write less code, but the statement is also strongly typed. We won't have a runtime error such as invalid query syntax or an invalid column name. Also, a SQL injection attack is no longer an issue, as LINQ to Entities will take care of this when translating LINQ expressions to the underlying SQL statements.

Creating UpdateProduct in the data access layer

In the previous section, we created the GetProduct method in the data access layer, using LINQ to Entities instead of ADO.NET. Now in this section, we will create the UpdateProduct method, again using LINQ to Entities instead of ADO.NET. Let's create the UpdateProduct method in the data access layer class ProductDAO, as follows:

public bool UpdateProduct(
    ref ProductBDO productBDO, ref string message)
{
    message = "product updated successfully";
    bool ret = true;
    using (var NWEntities = new NorthwindEntities())
    {
        var productID = productBDO.ProductID;
        Product productInDB = (from p in NWEntities.Products
                               where p.ProductID == productID
                               select p).FirstOrDefault();
        // check product
        if (productInDB == null)
        {
            throw new Exception("No product with ID " + productBDO.ProductID);
        }
        NWEntities.Products.Remove(productInDB);
        // update product
        productInDB.ProductName = productBDO.ProductName;
        productInDB.QuantityPerUnit = productBDO.QuantityPerUnit;
        productInDB.UnitPrice = productBDO.UnitPrice;
        productInDB.Discontinued = productBDO.Discontinued;
        productInDB.RowVersion = productBDO.RowVersion;
        NWEntities.Products.Attach(productInDB);
        NWEntities.Entry(productInDB).State = System.Data.EntityState.Modified;
        int num = NWEntities.SaveChanges();
        productBDO.RowVersion = productInDB.RowVersion;
        if (num != 1)
        {
            ret = false;
            message = "no product is updated";
        }
    }
    return ret;
}

Within this method, we first get the product from the database, making sure the product ID is a valid value in the database. Then, we apply the changes from the passed-in object to the object we have just retrieved from the database, and submit the changes back to the database. 
Let's go through a few notes about this method: You have to save productID in a new variable and then use it in the LINQ query; otherwise, you will get an error saying Cannot use ref or out parameter 'productBDO' inside an anonymous method, lambda expression, or query expression. If Remove and Attach are not called, the RowVersion from the database (not from the client) will be used when submitting to the database, even though you have updated its value before submitting; an update will always succeed, but without concurrency control. If Remove is not called and you call the Attach method, you will get an error saying The object cannot be attached because it is already in the object context. If the object state is not set to Modified, Entity Framework will not honor your changes to the entity object and you will not be able to save any change to the database.
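For comparison only, a more conventional way to get the same optimistic concurrency check, without the Remove/Attach trick, is to overwrite the original RowVersion value that Entity Framework tracks before saving. The following is a minimal sketch, not the approach used in this article's code; the method name is invented for illustration and it assumes the same NorthwindEntities context, Product entity, and ProductBDO class shown above:

// Sketch only: an alternative update core that relies on EF's
// built-in optimistic concurrency check instead of Remove/Attach.
public void UpdateProductAlternative(ProductBDO productBDO)
{
    using (var NWEntities = new NorthwindEntities())
    {
        var productID = productBDO.ProductID;
        Product productInDB = (from p in NWEntities.Products
                               where p.ProductID == productID
                               select p).FirstOrDefault();
        if (productInDB == null)
            throw new Exception("No product with ID " + productID);

        // Apply the client's changes to the tracked entity.
        productInDB.ProductName = productBDO.ProductName;
        productInDB.UnitPrice = productBDO.UnitPrice;

        // Treat the client's RowVersion as the original value; SaveChanges
        // then throws DbUpdateConcurrencyException if the row has been
        // changed by someone else in the meantime.
        NWEntities.Entry(productInDB).Property("RowVersion").OriginalValue = productBDO.RowVersion;
        NWEntities.SaveChanges();
    }
}

With this pattern, a conflicting edit surfaces as a concurrency exception that you can catch and report, rather than silently overwriting the other user's changes.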
Remove the original two service operations and add the following two new operations: [OperationContract] [FaultContract(typeof(ProductFault))] Product GetProduct(int id); [OperationContract] [FaultContract(typeof(ProductFault))] bool UpdateProduct(ref Product product, ref string message); Remove the original CompositeType and add the following data contract classes: [DataContract] public class Product { [DataMember] public int ProductID { get; set; } [DataMember] public string ProductName { get; set; } [DataMember] public string QuantityPerUnit { get; set; } [DataMember] public decimal UnitPrice { get; set; } [DataMember] public bool Discontinued { get; set; } [DataMember] public byte[] RowVersion { get; set; } } [DataContract] public class ProductFault { public ProductFault(string msg) { FaultMessage = msg; } [DataMember] public string FaultMessage; } The following is the content of the IProductService.cs file: using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ServiceModel; using System.Text; namespace LINQNorthwindService { [ServiceContract] public interface IProductService { [OperationContract] [FaultContract(typeof(ProductFault))] Product GetProduct(int id); [OperationContract] [FaultContract(typeof(ProductFault))] bool UpdateProduct(ref Product product, ref string message); } [DataContract] public class Product { [DataMember] public int ProductID { get; set; } [DataMember] public string ProductName { get; set; } [DataMember] public string QuantityPerUnit { get; set; } [DataMember] public decimal UnitPrice { get; set; } [DataMember] public bool Discontinued { get; set; } [DataMember] public byte[] RowVersion { get; set; } } [DataContract] public class ProductFault { public ProductFault(string msg) { FaultMessage = msg; } [DataMember] public string FaultMessage; } } Change the service implementation file Service1.cs, as follows: Change its filename from Service1.cs to ProductService.cs. Change its class name from Service1 to ProductService, if it is not done for you. 
Add the following two using statements to the ProductService.cs file:

using LINQNorthwindLogic;
using LINQNorthwindBDO;

Add the following class member variable:

ProductLogic productLogic = new ProductLogic();

Remove the original two methods and add the following two methods:

public Product GetProduct(int id)
{
    ProductBDO productBDO = null;
    try
    {
        productBDO = productLogic.GetProduct(id);
    }
    catch (Exception e)
    {
        string msg = e.Message;
        string reason = "GetProduct Exception";
        throw new FaultException<ProductFault>(new ProductFault(msg), reason);
    }
    if (productBDO == null)
    {
        string msg = string.Format("No product found for id {0}", id);
        string reason = "GetProduct Empty Product";
        throw new FaultException<ProductFault>(new ProductFault(msg), reason);
    }
    Product product = new Product();
    TranslateProductBDOToProductDTO(productBDO, product);
    return product;
}

public bool UpdateProduct(ref Product product, ref string message)
{
    bool result = true;
    // first check to see if it is a valid price
    if (product.UnitPrice <= 0)
    {
        message = "Price cannot be <= 0";
        result = false;
    }
    // ProductName can't be empty
    else if (string.IsNullOrEmpty(product.ProductName))
    {
        message = "Product name cannot be empty";
        result = false;
    }
    // QuantityPerUnit can't be empty
    else if (string.IsNullOrEmpty(product.QuantityPerUnit))
    {
        message = "Quantity cannot be empty";
        result = false;
    }
    else
    {
        try
        {
            var productBDO = new ProductBDO();
            TranslateProductDTOToProductBDO(product, productBDO);
            result = productLogic.UpdateProduct(ref productBDO, ref message);
            product.RowVersion = productBDO.RowVersion;
        }
        catch (Exception e)
        {
            string msg = e.Message;
            throw new FaultException<ProductFault>(new ProductFault(msg), msg);
        }
    }
    return result;
}

Because we have to convert between the data contract objects and the business domain objects, we need to add the following two methods:

private void TranslateProductBDOToProductDTO(ProductBDO productBDO, Product product)
{
    product.ProductID = productBDO.ProductID;
    product.ProductName = productBDO.ProductName;
    product.QuantityPerUnit = productBDO.QuantityPerUnit;
    product.UnitPrice = productBDO.UnitPrice;
    product.Discontinued = productBDO.Discontinued;
    product.RowVersion = productBDO.RowVersion;
}

private void TranslateProductDTOToProductBDO(Product product, ProductBDO productBDO)
{
    productBDO.ProductID = product.ProductID;
    productBDO.ProductName = product.ProductName;
    productBDO.QuantityPerUnit = product.QuantityPerUnit;
    productBDO.UnitPrice = product.UnitPrice;
    productBDO.Discontinued = product.Discontinued;
    productBDO.RowVersion = product.RowVersion;
}

Change the config file App.config, as follows: Change Service1 to ProductService. Remove the word Design_Time_Addresses. Change the port to 8080. Now, BaseAddress should be as follows:

http://localhost:8080/LINQNorthwindService/ProductService/

Copy the connection string from the App.config file in the LINQNorthwindDAL project to the App.config file of the service project, as follows:

<connectionStrings>
  <add name="NorthwindEntities"
       connectionString="metadata=res://*/Northwind.csdl|res://*/Northwind.ssdl|res://*/Northwind.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=localhost;initial catalog=Northwind;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>

You should leave the original connection string untouched in the App.config file in the data access layer project. This connection string is used by the Entity Model Designer at design time. 
It is not used at all during runtime, but if you remove it, whenever you open the entity model designer in Visual Studio, you will be prompted to specify a connection to your database. Now build the solution and there should be no errors. Testing the service with the WCF Test Client Now we can run the program to test the GetProduct and UpdateProduct operations with the WCF Test Client. You may need to run Visual Studio as administrator to start the WCF Test Client. First set LINQNorthwindService as the startup project and then press Ctrl + F5 to start the WCF Test Client. Double-click on the GetProduct operation, enter a valid product ID, and click on the Invoke button. The detailed product information should be retrieved and displayed on the screen, as shown in the following screenshot: Now double-click on the UpdateProduct operation, enter a valid product ID, and specify a name, price, quantity per unit, and then click on Invoke. This time you will get an exception as shown in the following screenshot: From this image we can see that the update failed. The error details, which are in HTML View in the preceding screenshot, actually tell us it is a concurrency error. This is because, from WCF Test Client, we can't enter a row version as it is not a simple datatype parameter, thus we didn't pass in the original RowVersion for the object to be updated, and when updating the object in the database, the Entity Framework thinks this product has been updated by some other users.
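To pass the original RowVersion back to the service, a plain client works better than the WCF Test Client. The following console client is only a sketch: it assumes you have added a service reference to the running service, that the generated proxy class is named ProductServiceClient (the proxy name and the product ID are illustrative), and that the generated contract preserves the ref parameters.

using System;

class TestClient
{
    static void Main()
    {
        var client = new ProductServiceClient();

        // Read the product first so that we hold its current RowVersion.
        Product product = client.GetProduct(1);
        product.UnitPrice += 1.0m;

        string message = "";
        // Sending the product back with the RowVersion we just read lets the
        // service perform its optimistic concurrency check properly.
        bool ok = client.UpdateProduct(ref product, ref message);
        Console.WriteLine(ok ? "Updated: " + message : "Update failed: " + message);

        client.Close();
    }
}

Because the client round-trips the RowVersion it read, the update only fails if another user really has changed the row in the meantime.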

ASP.NET MVC 2: Validating MVC

Packt
21 Jan 2011
5 min read
  ASP.NET MVC 2 Cookbook A fast-paced cookbook with recipes covering all that you wanted to know about developing with ASP.NET MVC Solutions to the most common problems encountered with ASP.NET MVC development Build and maintain large applications with ease using ASP.NET MVC Recipes to enhance the look, feel, and user experience of your web applications Expand your MVC toolbox with an introduction to lots of open source tools Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible         Introduction ASP.NET MVC provides a simple, but powerful, framework for validating forms. In this article, we'll start by creating a simple form, and then incrementally extend the functionality of our project to include client-side validation, custom validators, and remote validation. Basic input validation The moment you create an action to consume a form post, you're validating. Or at least the framework is. Whether it is a textbox validating to a DateTime, or checkbox to a Boolean, we can start making assumptions on what should be received and making provisions for what shouldn't. Let's create a form. How to do it... Create an empty ASP.NET MVC 2 project and add a master page called Site.Master to Views/Shared. In the models folder, create a new model called Person. This model is just an extended version of the Person class.Models/Person.cs: public class Person { [DisplayName("First Name")] public string FirstName { get; set; } [DisplayName("Middle Name")] public string MiddleName { get; set; } [DisplayName("Last Name")] public string LastName { get; set; } [DisplayName("Birth Date")] public DateTime BirthDate { get; set; } public string Email { get; set; } public string Phone { get; set; } public string Postcode { get; set; } public string Notes { get; set; } } Create a controller called HomeController and amend the Index action to return a new instance of Person as the view model.Controllers/HomeController.cs: public ActionResult Index() { return View(new Person()); } Build and then right-click on the action to create an Index view. Make it an empty view that strongly types to our Person class. Create a basic form in the Index view.Views/Home/Index.aspx: <% using (Html.BeginForm()) {%> <%: Html.EditorForModel() %> <input type="submit" name="submit" value="Submit" /> <% } %> We'll go back to the home controller now to capture the form submission. Create a second action called Index, which accepts only POSTs.Controllers/HomeController.cs: [HttpPost] public ActionResult Index(... At this point, we have options. We can consume our form in a few different ways, let's have a look at a couple of them now:Controllers/HomeController.cs (Example): // Individual Parameters public ActionResult Index(string firstName, DateTime birthdate... // Model Public ActionResult Index(Person person) { Whatever technique you choose, the resolution of the parameters is roughly the same. The technique that I'm going to demonstrate relies on a method called UpdateModel. But first we need to differentiate our POST action from our first catch-all action. Remember, actions are just methods, and overrides need to take sufficiently different parameters to prevent ambiguity. 
We will do this by taking a single parameter of type FormCollection, though we won't necessarily make use of it. Controllers/HomeController.cs:

[HttpPost]
public ActionResult Index(FormCollection form)
{
    var person = new Person();
    UpdateModel(person);
    return View(person);
}

The UpdateModel technique is a touch more long-winded, but comes with advantages. The first is that if you add a breakpoint on the UpdateModel line, you can see the exact point when an empty model becomes populated with the form collection, which is great for demonstration purposes. The main reason I go back to UpdateModel time and time again is the optional second parameter, includeProperties. This parameter allows you to selectively update the model, thereby bypassing validation on certain properties that you might want to handle independently. Build, run, and submit your form. If your page validates, your info should be returned back to you. However, add your birth date in an unrecognized format and watch it bomb. UpdateModel is a temperamental beast. Switch your UpdateModel for TryUpdateModel and see what happens. TryUpdateModel will return a Boolean indicating the success or failure of the submission. However, the most interesting thing is happening in the browser. How it works... With ASP.NET MVC, it sometimes feels like you're stripping the development process back to basics. I think this is a good thing; more control to render the page you want is good. But there is a lot of clever stuff going on in the background, starting off with Model Binders. When you send a request (GET, POST, and so on) to an ASP.NET MVC application, the query string, route values, and the form collection are passed through model binding classes, which result in usable structures (for example, your action's input parameters). These model binders can be overridden and extended to deal with more complex scenarios, but since ASP.NET MVC 2, I've rarely made use of this. A good starting point for further investigation would be DefaultModelBinder and IModelBinder. What about that validation message in the last screenshot, where did it come from? Apart from LabelFor and EditorFor, we also have ValidationMessageFor. If the model binders fail at any point to build our input parameters, the model binder will add an error message to the model state. The model state is picked up and displayed by the ValidationMessageFor method, but more on that later.
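To make the includeProperties idea concrete, here is a sketch of a variant of the POST action above, using TryUpdateModel so that only selected properties are bound; the property list is illustrative and assumes the Person model shown earlier:

[HttpPost]
public ActionResult Index(FormCollection form)
{
    var person = new Person();

    // Bind only the listed properties; anything else (Notes, for example)
    // can be handled separately. TryUpdateModel returns false if binding
    // any of these properties fails.
    bool bound = TryUpdateModel(person,
        new[] { "FirstName", "LastName", "BirthDate", "Email" });

    if (!bound)
    {
        // Binding errors are already in ModelState, so the view can surface
        // them through Html.ValidationMessageFor(m => m.BirthDate) and friends.
        return View(person);
    }

    return View(person);
}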

Debugging Your .NET Application

Packt
21 Jul 2016
13 min read
In this article by Jeff Martin, author of the book Visual Studio 2015 Cookbook - Second Edition, we will discuss how modern software development still requires developers to identify and correct bugs in their code. The edit-compile-test cycle is as familiar as the text editor, and the rise of portable devices has added the need to measure battery consumption and to optimize for multiple architectures. Fortunately, our development tools continue to evolve to combat this rise in complexity, and Visual Studio continues to improve its arsenal. (For more resources related to this topic, see here.) Multi-threaded code and asynchronous code are probably the two most difficult areas for most developers to work with, and also the hardest to debug when you have a problem like a race condition. A race condition occurs when multiple threads perform an operation at the same time, and the order in which they execute makes a difference to how the software runs or how the output is generated. Race conditions often result in deadlocks, incorrect data being used in other calculations, and random, unrepeatable crashes. The other painful area to debug involves code running on other machines, whether it is running locally on your development machine or running in production. Hooking up a remote debugger in previous versions of Visual Studio has been less than simple, and the experience of debugging code in production was similarly frustrating. In this article, we will cover the following sections: Putting Diagnostic Tools to work Maximizing everyday debugging Putting Diagnostic Tools to work In Visual Studio 2013, Microsoft debuted a new set of tools called the Performance and Diagnostics hub. With VS2015, these tools have been revised further and, in the case of Diagnostic Tools, promoted to a central presence on the main IDE window; they are displayed, by default, during debugging sessions. This is great for us as developers, because now it is easier than ever to troubleshoot and improve our code. In this section, we will explore how Diagnostic Tools can be used to explore our code, identify bottlenecks, and analyze memory usage. Getting ready The changes didn't stop when VS2015 was released, and succeeding updates to VS2015 have further refined the capabilities of these tools. So for this section, ensure that Update 2 has been installed on your copy of VS2015. We will be using Visual Studio Community 2015, but of course, you may use one of the premium editions too. How to do it… For this section, we will put together a short program that will generate some activity for us to analyze: Create a new C# Console Application, and give it a name of your choice. In your project's new Program.cs file, add the following method that will generate a large quantity of strings:

static List<string> makeStrings()
{
    List<string> stringList = new List<string>();
    Random random = new Random();
    for (int i = 0; i < 1000000; i++)
    {
        string x = "String details: " + (random.Next(1000, 100000));
        stringList.Add(x);
    }
    return stringList;
}

Next we will add a second static method that produces an SHA256-calculated hash of each string that we generated. This method reads in each string that was previously generated, creates an SHA256 hash for it, and returns the list of computed hashes in the hex format. 
static List<string> hashStrings(List<string> srcStrings) { List<string> hashedStrings = new List<string>(); SHA256 mySHA256 = SHA256Managed.Create(); StringBuilder hash = new StringBuilder(); foreach (string str in srcStrings) { byte[] srcBytes = mySHA256.ComputeHash(Encoding.UTF8.GetBytes(str), 0, Encoding.UTF8.GetByteCount(str)); foreach (byte theByte in srcBytes) { hash.Append(theByte.ToString("x2")); } hashedStrings.Add(hash.ToString()); hash.Clear(); } mySHA256.Clear(); return hashedStrings; } After adding these methods, you may be prompted to add using statements for System.Text and System.Security.Cryptography. These are definitely needed, so go ahead and take Visual Studio's recommendation to have them added. Now we need to update our Main method to bring this all together. Update your Main method to have the following: static void Main(string[] args) { Console.WriteLine("Ready to create strings"); Console.ReadKey(true); List<string> results = makeStrings(); Console.WriteLine("Ready to Hash " + results.Count() + " strings "); //Console.ReadKey(true); List<string> strings = hashStrings(results); Console.ReadKey(true); } Before proceeding, build your solution to ensure everything is in working order. Now run the application in the Debug mode (F5), and watch how our program operates. By default, the Diagnostic Tools window will only appear while debugging. Feel free to reposition your IDE windows to make their presence more visible or use Ctrl + Alt + F2 to recall it as needed. When you first launch the program, you will see the Diagnostic Tools window appear. Its initial display resembles the following screenshot. Thanks to the first ReadKey method, the program will wait for us to proceed, so we can easily see the initial state. Note that CPU usage is minimal, and memory usage holds constant. Before going any further, click on the Memory Usage tab, and then the Take Snapshot command as indicated in the preceding screenshot. This will record the current state of memory usage by our program, and will be a useful comparison point later on. Once a snapshot is taken, your Memory Usage tab should resemble the following screenshot: Having a forced pause through our ReadKey() method is nice, but when working with real-world programs, we will not always have this luxury. Breakpoints are typically used for situations where it is not always possible to wait for user input, so let's take advantage of the program's current state, and set two of them. We will put one to the second WriteLine method, and one to the last ReadKey method, as shown in the following screenshot: Now return to the open application window, and press a key so that execution continues. The program will stop at the first break point, which is right after it has generated a bunch of strings and added them to our List object. Let's take another snapshot of the memory usage using the same manner given in Step 9. You may also notice that the memory usage displayed in the Process Memory gauge has increased significantly, as shown in this screenshot: Now that we have completed our second snapshot, click on Continue in Visual Studio, and proceed to the next breakpoint. The program will then calculate hashes for all of the generated strings, and when this has finished, it will stop at our last breakpoint. Take another snapshot of the memory usage. Also take notice of how the CPU usage spiked as the hashes were being calculated: Now that we have these three memory snapshots, we will examine how they can help us. 
You may notice how memory usage increases during execution, especially from the initial snapshot to the second. Click on the second snapshot's object delta, as shown in the following screenshot: On clicking, this will open the snapshot details in a new editor window. Click on the Size (Bytes) column to sort by size, and as you may suspect, our List<String> object is indeed the largest object in our program. Of course, given the nature of our sample program, this is fairly obvious, but when dealing with more complex code bases, being able to utilize this type of investigation is very helpful. The following screenshot shows the results of our filter: If you would like to know more about the object itself (perhaps there are multiple objects of the same type), you can use the Referenced Types option as indicated in the preceding screenshot. If you would like to try this out on the sample program, be sure to set a smaller number in the makeStrings() loop, otherwise you will run the risk of overloading your system. Returning to the main Diagnostic Tools window, we will now examine CPU utilization. While the program is executing the hashes (feel free to restart the debugging session if necessary), you can observe where the program spends most of its time: Again, it is probably no surprise that most of the hard work was done in the hashStrings() method. But when dealing with real-world code, it will not always be so obvious where the slowdowns are, and having this type of insight into your program's execution will make it easier to find areas requiring further improvement. When using the CPU profiler in our example, you may find it easier to remove the first breakpoint and simply trigger a profiling by clicking on Break All as shown in this screenshot: How it works... Microsoft wanted more developers to be able to take advantage of their improved technology, so they have increased its availability beyond the Professional and Enterprise editions to also include Community. Running your program within VS2015 with the Diagnostic Tools window open lets you examine your program's performance in great detail. By using memory snapshots and breakpoints, VS2015 provides you with the tools needed to analyze your program's operation, and determine where you should spend your time making optimizations. There's more… Our sample program does not perform a wide variety of tasks, but of course, more complex programs usually perform well. To further assist with analyzing those programs, there is a third option available to you beyond CPU Usage and Memory Usage: the Events tab. As shown in the following screenshot, the Events tab also provides the ability to search events for interesting (or long-running) activities. Different event types include file activity, gestures (for touch-based apps), and program modules being loaded or unloaded. Maximizing everyday debugging Given the frequency of debugging, any refinement to these tools can pay immediate dividends. VS 2015 brings the popular Edit and Continue feature into the 21st century by supporting a 64-bit code. Added to that is the new ability to see the return value of functions in your debugger. The addition of these features combine to make debugging code easier, allowing to solve problems faster. Getting ready For this section, you can use VS 2015 Community or one of the premium editions. Be sure to run your choice on a machine using a 64-bit edition of Windows, as that is what we will be demonstrating in the section. 
Don't worry, you can still use Edit and Continue with 32-bit C# and Visual Basic code. How to do it… Both features are now supported by C#/VB, but we will be using C# for our examples. The features being demonstrated are compiler features, so feel free to use code from one of your own projects if you prefer. To see how Edit and Continue can benefit 64-bit development, perform the following steps: Create a new C# Console Application using the default name. To ensure the demonstration is running with 64-bit code, we need to change the default solution platform. Click on the drop-down arrow next to Any CPU, and select Configuration Manager... When the Configuration Manager dialog opens, we can create a new project platform targeting a 64-bit code. To do this, click on the drop-down menu for Platform, and select <New...>: When <New...> is selected, it will present the New Project Platform dialog box. Select x64 as the new platform type: Once x64 has been selected, you will return to Configuration Manager. Verify that x64 remains active under Platform, and then click on Close to close this dialog. The main IDE window will now indicate that x64 is active: With the project settings out of the face, let's add some code to demonstrate the new behavior. Replace the existing code in your blank class file so that it looks like the following listing: class Program { static void Main(string[] args) { int w = 16; int h = 8; int area = calcArea(w, h); Console.WriteLine("Area: " + area); } private static int calcArea(int width, int height) { return width / height; } } Let's set some breakpoints so that we are able to inspect during execution. First, add a breakpoint to the Main method's Console line. Add a second breakpoint to the calcArea method's return line. You can do this by either clicking on the left side of the editor window's border, or by right-clicking on the line, and selecting Breakpoint | Insert Breakpoint: If you are not sure where to click, use the right-click method, and then practice toggling the breakpoint by left-clicking on the breakpoint marker. Feel free to use whatever method you find most convenient. Once the two breakpoints are added, Visual Studio will mark their location as shown in the following screenshot (the arrow indicates where you may click to toggle the breakpoint): With the breakpoint marker now set, let's debug the program. Begin debugging by either pressing F5, or by clicking on the Start button on the toolbar: Once debugging starts, the program will quickly execute until stopped by the first breakpoint. Let's first take a look at Edit and Continue. Visual Studio will stop at the calcArea method's return line. Astute readers will notice an error (marked by 1 in the following screenshot) present in the calculation, as the area value returned should be width * height. Make the correction. Before continuing, note the variables listed in the Autos window (marked by 2 in the following screenshot). (If you don't see Autos, it can be made visible by pressing Ctrl + D, A, or through Debug | Windows | Autos while debugging.) After correcting the area calculation, advance the debugging step by pressing F10 twice. (Alternatively make the advancement by selecting the menu item Debug | Step Over twice). Visual Studio will advance to the declaration for the area. Note that you were able to edit your code and continue debugging without restarting. 
The Autos window will update to display the function's return value, which is 128 (the value for area has not been assigned yet in the following screenshot—Step Over once more if you would like to see that assigned): There's more… Programmers who write C++ have already had the ability to see the return values of functions—this just brings .NET developers into the fold. The result is that your development experience won't have to suffer based on the language you have chosen to use for your project. The Edit and Continue functionality is also available for ASP.NET projects. New projects created on VS2015 will have Edit and Continue enabled by default. Existing projects imported to VS2015 will usually need this to be enabled if it hasn't been done already. To do so, open the Options dialog via Tools | Options, and look for the Debugging | General section. The following screenshot shows where this option is located on the properties page: Whether you are working with an ASP.NET project or a regular C#/VB .NET application, you can verify that Edit and Continue is set via this location. Summary In this article, we examined the improvements to the debugging experience in Visual Studio 2015, and how they can help you diagnose the root cause of a problem faster so that you can fix it properly, and not just patch over the symptoms. Resources for Article: Further resources on this subject: Creating efficient reports with Visual Studio [article] Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [article]

Entering People Information

Packt
24 Jun 2015
9 min read
In this article by Pravin Ingawale, author of the book Oracle E-Business Suite R12.x HRMS – A Functionality Guide, we will learn about entering a person's information in Oracle HRMS. We will understand the hiring process in Oracle, which is actually part of the Oracle iRecruitment module in Oracle apps. We will then see how to create an employee in Core HR, learn the concept of person types and how to define them, and look at entering information for an employee, including additional information. Let's see how to create an employee in Core HR. (For more resources related to this topic, see here.) Creating an employee An employee is the most important entity in an organization. Before creating an employee, the HR officer must know the date from which the employee will be active in the organization. In Oracle terminology, you can call it the employee's hire date. Apart from this, the HR officer must know basic details of the employee, such as first name, last name, date of birth, and so on. Navigate to US HRMS Manager | People | Enter and Maintain. This is the basic form, called People in Oracle HRMS, which is used to create an employee in the application. As you can see in the form, there is a field named Last, which is marked in yellow. This indicates that the field is mandatory for creating an employee record. First, you need to set the effective date on the form. You can set this by clicking on the icon, as shown in the following screenshot: You need to enter the mandatory field data along with additional data. The following screenshot shows the data entered: Once you enter the required data, you need to specify the action for the entered record. The action we have selected is Create Employment. The Create Employment action will create an employee in the application. There are other actions such as Create Applicant, which is used to create an applicant for iRecruitment. The Create Placement action is used to create a contingent worker in your enterprise. Once you select this action, it will prompt you to enter the person type of this employee, as in the following screenshot. Select the Person Type as Employee and save the record. We will see the concept of person type in the next section. Once you select the employee person type and then save the record, the system will automatically generate the employee number for the person. In our case, the system has generated the employee number 10160. So now, we have created an employee in the application. Concept of person types In any organization, you need to identify, or group, different types of people. There are basically three types of people you capture in an HRMS system. They are as follows: Employees: These include current employees and past employees. Past employees are those who were part of your enterprise earlier and are no longer active in the system. You can call them terminated or ex-employees. Applicants: If you are using iRecruitment, applicants can be created. External people: Contact is a special category of the external type. Contacts are associated with an employee or an applicant. For example, there might be a need to record the name, address, and phone number of an emergency contact for each employee in your organization. There might also be a need to keep information on dependents of an employee for medical insurance purposes or for some payments in payroll processing. Using person types There are predefined person types in Oracle HRMS. 
You can add more person types as per your requirements. You can also change the name of existing person types when you install the system. Let's take an example for your understanding. Your organization has employees. There might be employees of different types; you might have regular employees and employees who are contractors in your organization. Hence, you can categorize employees in your organization into two types: Regular employees Consultants The reason for creating these categories is to easily identify the employee type and store different types of information for each category. Similarly, if you are using I-recruitment, then you will have candidates. Hence, you can categorize candidates into two types. One will be internal candidate and the other will be external candidate. Internal candidates will be employees within your organization who can apply for an opening within your organization. An external candidate is an applicant who does not work for your organization but is applying for a position that is open in your company. Defining person types In an earlier section, you learned the concept of person types, and now you will learn how to define person types in the system. Navigate to US HRMS Manager | Other Definitions | Person Types. In the preceding screenshot, you can see four fields, that is, User Name, System Name, Active, and Default flag. There are eight person types recognized by the system and identified by a system name. For each system name, there are predefined usernames. A username can be changed as per your needs. There must be one username that should be the default. While creating an employee, the person types that are marked by the default flag will come by default. To change a username for a person type, delete the contents of the User Name field and type the name you'd prefer to keep. To add a new username to a person type system name: Select New Record from the Edit menu. Enter a unique username and select the system name you want to use. Deactivating person types You cannot delete person types, but you can deactivate them by unchecking the Active checkbox. Entering personal and additional information Until now, you learned how to create an employee by entering basic details such as title, gender, and date of birth. In addition to this, you can enter some other information for an employee. As you can see on the people form, there are various tabs such as Employment, Office details, Background, and so on. Each tab has some fields that can store information. For example, in our case, we have stored the e-mail address of the employee in the Office Details tab. Whenever you enter any data for an employee and then click on the Save button, it will give you two options as shown in the following screenshot: You have to select one of the options to save the data. The differences between both the options are explained with an example. Let's say you have hired a new employee as of 01-Jan-2014. Hence, a new record will be created in the application with the start date as 01-Jan-2014. This is called an effective start date of the record. There is no end date for this record, so Oracle gives it a default end date, which is 31-Dec-4712. This is called the effective end date of the record. Now, in our case, Oracle has created a single record with the start date and end date as 01-Jan-2014 and 31-Dec-4712, respectively. 
When we try to enter additional data for this record (in our case, it is phone number) then Oracle will prompt you to select the Correction or Update option. This is called the date-tracked option. If you select the correction mode, then Oracle will update an existing record in the application. Now, if you date track to, say, 01-Aug-2014 and then enter the phone number and select the update mode, then it will end the historical data with the new date minus one and create a new record with the start date 01-Aug-2014 with the phone number that you have entered. Thus, the historical data will be preserved and a new record will be created with the start date 01-Aug-2014 and a phone number. The following tabular representation will help you understand better in Correction mode: Employee Number LastName Effective Start Date Effective End Date Phone Number 10160 Test010114 01-Jan-2014 31-Dec-4712 +0099999999 Now, if you want to change the phone number from 01-Aug-2014 in Update mode (date 01-Aug-2014), then the record will be as follows: Employee Number LastName Effective Start Date Effective End Date Phone Number 10160 Test010114 01-Jan-2014 31-Jul-2014 +0099999999 10160 Test010114 01-Aug-2014 31-Jul-2014 +0088888888 Thus, in update mode, you can see that historical data is intact. If HR wants to view some historical data, then the HR employee can easily view this data. Everything associated with Oracle HRMS is date-tracked. Every characteristic about the organization, person, position, salary, and benefits is tightly date-tracked. This concept is very important in Oracle and is used in almost all the forms in which you store employee-related information. Thus, you have learned about the date tracking concept in Oracle APPS. There are some additional fields, which can be configured as per your requirements. Additional personal data can be stored in these fields. These are called as descriptive flexfields in Oracle. We created personal DFF to store data about Years of Industry Experience and whether an employee is Oracle Certified or not. This data can be stored in the People form DFF as marked in the following screenshot: When you click on the box, it will open the new form as shown in the following screenshot. Here, you can enter the additional data. This is called Additional Personal Details DFF. It is stored in personal data; this is normally referred to as the People form DFF. We have created a Special Information Types (SIT) to store information on languages known by an employee. This data will have two attributes, namely, the language known and the fluency. This can be entered by navigating to US HRMS Manager | People | Enter and Maintain | Special Info. Click on the Details section. This will open a new form to enter the required details. Each record in the SIT is date-tracked. You can enter the start date and the end date. Thus, we have seen DFF in which you stored additional person data and we have seen KFF, where you enter the SIT data. Summary In this article, you have learned about creating a new employee, entering employee data, and additional data using DFF and KFF. You also learned the concept of person type. Resources for Article: Further resources on this subject: Knowing the prebuilt marketing, sales, and service organizations [article] Oracle E-Business Suite with Desktop Integration [article] Oracle Integration and Consolidation Products [article]

Normalizing Dimensional Model

Packt
08 Feb 2010
3 min read
First Normal Form Violation in Dimension table Let's revisit the author problem in the book dimension. The AUTHOR column contains multiple authors, a first-normal-form violation, which prevents us from querying books or sales by author.

BOOK dimension table

BOOK_SK | TITLE                | AUTHOR                        | PUBLISHER | CATEGORY    | SUB-CATEGORY
1       | Programming in Java  | King, Chan                    | Pac       | Programming | Java
2       | Learning Python      | Simpson                       | Pac       | Programming | Python
3       | Introduction to BIRT | Chan, Gupta, Simpson (Editor) | Apes      | Reporting   | BIRT
4       | Advanced Java        | King, Chan                    | Apes      | Programming | Java

Normalizing and spinning off the authors into a dimension and adding an artificial BOOK AUTHOR fact solves the problem; it is an artificial fact as it does not contain any real business measure. Note that Editor, which is an author's role rather than an author's name, is also "normalized" out into its own ROLE column in the BOOK AUTHOR fact (it is related to the author, but is not actually an author's name).

AUTHOR table

AUTHOR_SK | AUTHOR_NAME
1         | King
2         | Chan
3         | Simpson
4         | Gupta

BOOK AUTHOR table

BOOK_SK | AUTHOR_SK | ROLE      | COUNT
1       | 1         | Co-author | 1
1       | 2         | Co-author | 1
2       | 3         | Author    | 1
3       | 2         | Co-author | 1
3       | 3         | Editor    | 1
3       | 4         | Co-author | 1
4       | 1         | Co-author | 1
4       | 2         | Co-author | 1

Note that the artificial COUNT measure, which facilitates aggregation, always has a value of 1.

SELECT name, SUM(COUNT)
FROM book_author ba, book_dim b, author_dim a
WHERE ba.book_sk = b.book_sk
AND ba.author_sk = a.author_sk
GROUP BY name

You might need to query sales by author, which you can do by combining the queries of each of the two stars (the two facts) on their common dimension (the BOOK dimension), producing daily book sales by author.

SELECT dt, title, name, role, sales_amt
FROM (SELECT book_sk, dt, title, sales_amt
      FROM sales_fact s, date_dim d, book_dim b
      WHERE s.book_sk = b.book_sk
      AND s.date_sk = d.date_sk) sales,
     (SELECT b.book_sk, name, role
      FROM book_author ba, book_dim b, author_dim a
      WHERE ba.book_sk = b.book_sk
      AND ba.author_sk = a.author_sk) author
WHERE sales.book_sk = author.book_sk

Single Column with Repeating Value in Dimension table Columns like PUBLISHER, though not violating any normal form, are also good candidates for normalization, which we accomplish by adding an artificial fact, the BOOK PUBLISHER fact, and its own dimension, the PUBLISHER dimension. This normalization is not exactly the same as normalizing a first-normal-form violation; the PUBLISHER dimension can correctly be linked to the SALES fact, though the publisher surrogate key must then be added to the SALES fact.

PUBLISHER table

PUBLISHER_SK | PUBLISHER
1            | Pac
2            | Apes

BOOK PUBLISHER table

BOOK_SK | PUBLISHER_SK | COUNT
1       | 1            | 1
2       | 1            | 1
3       | 2            | 1
4       | 2            | 1

Related Columns with Repeating Value in Dimension table The CATEGORY and SUB-CATEGORY columns are related; they form a hierarchy. Each of them can be normalized into its own dimension, but they all need to be linked into one artificial fact (an illustrative sketch appears at the end of this article).

Non-Measure Column in Fact table The ROLE column inside the BOOK AUTHOR fact is not a measure; it violates the dimensional modeling norm. To resolve this, we just need to spin it off into its own dimension, effectively normalizing the fact table.

ROLE dimension table and sample rows

ROLE_SK | ROLE
1       | Author
2       | Co-Author
3       | Editor

BOOK AUTHOR table with normalized ROLE

BOOK_SK | AUTHOR_SK | ROLE_SK | COUNT
1       | 1         | 2       | 1
1       | 2         | 2       | 1
2       | 3         | 1       | 1
3       | 2         | 2       | 1
3       | 3         | 3       | 1
3       | 4         | 2       | 1
4       | 1         | 2       | 1
4       | 2         | 2       | 1

Summary This article shows that both the dimension tables and the fact tables in a dimensional model can be normalized without violating its modeling norm. 
If you have read this article, you may be interested to view: Solving Many-to-Many Relationship in Dimensional Modeling
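As a follow-up to the Related Columns with Repeating Value section above, the following SQL sketch shows one way CATEGORY and SUB-CATEGORY could each be spun off into their own dimension, linked through a single artificial BOOK CATEGORY fact. The table and column names are illustrative only and are not part of the model described above.

-- Illustrative sketch: each level of the hierarchy gets its own dimension,
-- and one artificial fact links them back to the book.
CREATE TABLE category_dim    (category_sk INT, category VARCHAR(50));
CREATE TABLE subcategory_dim (subcategory_sk INT, subcategory VARCHAR(50));

CREATE TABLE book_category (
  book_sk        INT,
  category_sk    INT,
  subcategory_sk INT,
  count          INT   -- artificial measure, always 1 (quote the name if your database reserves COUNT)
);

-- Books per category and sub-category, using the artificial COUNT measure
SELECT c.category, s.subcategory, SUM(bc.count)
FROM book_category bc, category_dim c, subcategory_dim s
WHERE bc.category_sk = c.category_sk
AND bc.subcategory_sk = s.subcategory_sk
GROUP BY c.category, s.subcategory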

Your first FuelPHP application in 7 easy steps

Packt
04 Mar 2015
12 min read
In this article by Sébastien Drouyer, author of the book FuelPHP Application Development Blueprints we will see that FuelPHP is an open source PHP framework using the latest technologies. Its large community regularly creates and improves packages and extensions, and the framework’s core is constantly evolving. As a result, FuelPHP is a very complete solution for developing web applications. (For more resources related to this topic, see here.) In this article, we will also see how easy it is for developers to create their first website using the PHP oil utility. The target application Suppose you are a zoo manager and you want to keep track of the monkeys you are looking after. For each monkey, you want to save: Its name If it is still in the zoo Its height A description input where you can enter custom information You want a very simple interface with five major features. You want to be able to: Create new monkeys Edit existing ones List all monkeys View a detailed file for each monkey Delete monkeys These preceding five major features, very common in computer applications, are part of the Create, Read, Update and Delete (CRUD) basic operations. Installing the environment The FuelPHP framework needs the three following components: Webserver: The most common solution is Apache PHP interpreter: The 5.3 version or above Database: We will use the most popular one, MySQL The installation and configuration procedures of these components will depend on the operating system you use. We will provide here some directions to get you started in case you are not used to install your development environment. Please note though that these are very generic guidelines. Feel free to search the web for more information, as there are countless resources on the topic. Windows A complete and very popular solution is to install WAMP. This will install Apache, MySQL and PHP, in other words everything you need to get started. It can be accessed at the following URL: http://www.wampserver.com/en/ Mac PHP and Apache are generally installed on the latest version of the OS, so you just have to install MySQL. To do that, you are recommended to read the official documentation: http://dev.mysql.com/doc/refman/5.1/en/macosx-installation.html A very convenient solution for those of you who have the least system administration skills is to install MAMP, the equivalent of WAMP but for the Mac operating system. It can be downloaded through the following URL: http://www.mamp.info/en/downloads/ Ubuntu As this is the most popular Linux distribution, we will limit our instructions to Ubuntu. You can install a complete environment by executing the following command lines: # Apache, MySQL, PHP sudo apt-get install lamp-server^   # PHPMyAdmin allows you to handle the administration of MySQL DB sudo apt-get install phpmyadmin   # Curl is useful for doing web requests sudo apt-get install curl libcurl3 libcurl3-dev php5-curl   # Enabling the rewrite module as it is needed by FuelPHP sudo a2enmod rewrite   # Restarting Apache to apply the new configuration sudo service apache2 restart Getting the FuelPHP framework There are four common ways to download FuelPHP: Downloading and unzipping the compressed package which can be found on the FuelPHP website. Executing the FuelPHP quick command-line installer. Downloading and installing FuelPHP using Composer. Cloning the FuelPHP GitHub repository. It is a little bit more complicated but allows you to select exactly the version (or even the commit) you want to install. 
The easiest way is to download and unzip the compressed package located at: http://fuelphp.com/files/download/28 You can get more information about this step in Chapter 1 of FuelPHP Application Development Blueprints, which can be accessed freely. It is also well-documented on the website installation instructions page: http://fuelphp.com/docs/installation/instructions.html Installation directory and apache configuration Now that you know how to install FuelPHP in a given directory, we will explain where to install it and how to configure Apache. The simplest way The simplest way is to install FuelPHP in the root folder of your web server (generally the /var/www directory on Linux systems). If you install fuel in the DIR directory inside the root folder (/var/www/DIR), you will be able to access your project on the following URL: http://localhost/DIR/public/ However, be warned that fuel has not been implemented to support this, and if you publish your project this way in the production server, it will introduce security issues you will have to handle. In such cases, you are recommended to use the second way we explained in the section below, although, for instance if you plan to use a shared host to publish your project, you might not have the choice. A complete and up to date documentation about this issue can be found in the Fuel installation instruction page: http://fuelphp.com/docs/installation/instructions.html By setting up a virtual host Another way is to create a virtual host to access your application. You will need a *nix environment and a little bit more apache and system administration skills, but the benefit is that it is more secured and you will be able to choose your working directory. You will need to change two files: Your apache virtual host file(s) in order to link a virtual host to your application Your system host file, in order redirect the wanted URL to your virtual host In both cases, the files location will be very dependent on your operating system and the server environment you are using, so you will have to figure their location yourself (if you are using a common configuration, you won’t have any problem to find instructions on the web). In the following example, we will set up your system to call your application when requesting the my.app URL on your local environment. Let’s first edit the virtual host file(s); add the following code at the end: <VirtualHost *:80>    ServerName my.app    DocumentRoot YOUR_APP_PATH/public    SetEnv FUEL_ENV "development"    <Directory YOUR_APP_PATH/public>        DirectoryIndex index.php        AllowOverride All        Order allow,deny        Allow from all    </Directory> </VirtualHost> Then, open your system host files and add the following line at the end: 127.0.0.1 my.app Depending on your environment, you might need to restart Apache after that. You can now access your website on the following URL: http://my.app/ Checking that everything works Whether you used a virtual host or not, the following should now appear when accessing your website: Congratulation! You just have successfully installed the FuelPHP framework. The welcome page shows some recommended directions to continue your project. Database configuration As we will store our monkeys into a MySQL database, it is time to configure FuelPHP to use our local database. 
If you open fuel/app/config/db.php, all you will see is an empty array but this configuration file is merged to fuel/app/config/ENV/db.php, ENV being the current Fuel’s environment, which in that case is development. You should therefore open fuel/app/config/development/db.php: <?php //... return array( 'default' => array(    'connection' => array(      'dsn'       => 'mysql:host=localhost;dbname=fuel_dev',      'username'   => 'root',      'password'   => 'root',    ), ), ); You should adapt this array to your local configuration, particularly the database name (currently set to fuel_dev), the username, and password. You must create your project’s database manually. Scaffolding Now that the database configuration is set, we will be able to generate a scaffold. We will use for that the generate feature of the oil utility. Open the command-line utility and go to your website root directory. To generate a scaffold for a new model, you will need to enter the following line: php oil generate scaffold/crud MODEL ATTR_1:TYPE_1 ATTR_2:TYPE_2 ... Where: MODEL is the model name ATTR_1, ATTR_2… are the model’s attributes names TYPE_1, TYPE_2… are each attribute type In our case, it should be: php oil generate scaffold/crud monkey name:string still_here:bool height:float description:text Here we are telling oil to generate a scaffold for the monkey model with the following attributes: name: The name of the monkey. Its type is string and the associated MySQL column type will be VARCHAR(255). still_here: Whether or not the monkey is still in the facility. Its type is boolean and the associated MySQL column type will be TINYINT(1). height: Height of the monkey. Its type is float and its associated MySQL column type will be FLOAT. description: Description of the monkey. Its type is text and its associated MySQL column type will be TEXT. You can do much more using the oil generate feature, as generating models, controllers, migrations, tasks, package and so on. We will see some of these in the FuelPHP Application Development Blueprints book and you are also recommended to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/generate.html When you press Enter, you will see the following lines appear: Creating migration: APPPATH/migrations/001_create_monkeys.php Creating model: APPPATH/classes/model/monkey.php Creating controller: APPPATH/classes/controller/monkey.php Creating view: APPPATH/views/monkey/index.php Creating view: APPPATH/views/monkey/view.php Creating view: APPPATH/views/monkey/create.php Creating view: APPPATH/views/monkey/edit.php Creating view: APPPATH/views/monkey/_form.php Creating view: APPPATH/views/template.php Where APPPATH is your website directory/fuel/app. Oil has generated for us nine files: A migration file, containing all the necessary information to create the model’s associated table The model A controller Five view files and a template file More explanation about these files and how they interact with each other can be accessed in Chapter 1 of the FuelPHP Application Development Blueprints book, freely available. For those of you who are not yet familiar with MVC and HMVC frameworks, don’t worry; the chapter contains an introduction to the most important concepts. Migrating One of the generated files was APPPATH/migrations/001_create_monkeys.php. It is a migration file and contains the required information to create our monkey table. Notice the name is structured as VER_NAME where VER is the version number and NAME is the name of the migration. 
If you execute the following command line:

    php oil refine migrate

all migration files that have not yet been executed will be executed, from the oldest version to the latest (001, 002, 003, and so on). Once all files are executed, oil will display the latest version number.

Once this is done, if you take a look at your database, you will observe that not one, but two tables have been created:

- monkeys: As expected, a table has been created to hold your monkeys. Notice that the table name is the plural of the word we typed when generating the scaffold; this transformation was done internally using the Inflector::pluralize method. The table contains the specified columns (name, still_here, height, description), the id column, but also created_at and updated_at. These columns store the time an object was created and updated respectively, and are added by default each time you generate a model. It is possible to omit them by using the --no-timestamp argument.
- migration: This other table was created automatically. It keeps track of the migrations that have been executed. If you look at its content, you will see that it already contains one row; this is the migration you just executed. Notice that the row does not only record which migration was executed, but also a type and a name. This is because migration files can be located in several places, such as modules or packages.

The oil utility allows you to do much more. Don't hesitate to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/intro.html

Or, again, read Chapter 1 of FuelPHP Application Development Blueprints, which is available for free.

Using your application

Now that we have generated the code and migrated the database, our application is ready to be used. Request the following URL:
- If you created a virtual host: http://my.app/monkey
- Otherwise (don't forget to replace DIR): http://localhost/DIR/public/monkey

As you can see, this webpage is intended to display the list of all monkeys, but since none have been added yet, the list is empty. So let's add a new monkey by clicking on the Add new Monkey button. A creation form should appear.

You can enter your monkey's information here. The form is certainly not perfect - for instance, the Still here field uses a standard text input although a checkbox would be more appropriate - but it is a great start. All we will have to do is refine the code a little.

Once you have added several monkeys, you can take another look at the listing page. Again, this is a great start, though we might want to refine it. Each item on the list has three associated actions: View, Edit, and Delete.

Let's first click on View: again a great start, though we will refine this webpage too. You can return to the listing by clicking on Back, or edit the monkey record by clicking on Edit. Whether accessed from the listing page or the view page, Edit displays the same form as when creating a new monkey, except that the form is prefilled, of course. Finally, if you click on Delete, a confirmation box appears to prevent accidental deletions.

Want to learn more? Don't hesitate to check out Chapter 1 of FuelPHP Application Development Blueprints, which is freely available on Packt Publishing's website. In that chapter, you will find a more thorough introduction to FuelPHP, and we will show how to improve this first application.
You are also encouraged to explore the FuelPHP website, which contains a lot of useful information and excellent documentation: http://www.fuelphp.com

There is much more to discover about this wonderful framework.

Summary

In this article, we learned how to install the FuelPHP environment, where to place it and how to configure Apache and the database, and how to scaffold and migrate a first application.

Resources for Article:

Further resources on this subject:
- PHP Magic Features [Article]
- FuelPHP [Article]
- Building a To-do List with Ajax [Article]

Integrating BizTalk Server and Microsoft Dynamics CRM

Packt
20 Jul 2011
7 min read
What is Microsoft Dynamics CRM?

Customer relationship management is a critical part of virtually every business. Dynamics CRM 2011 offers a solution for the three traditional areas of CRM: sales, marketing, and customer service.

For customers interested in managing a sales team, Dynamics CRM 2011 has a strong set of features. This includes organizing teams into territories, defining price lists, managing opportunities, maintaining organization structures, tracking sales pipelines, enabling mobile access, and much more.

If you are using Dynamics CRM 2011 for marketing efforts, then you have the ability to import data from multiple sources, plan campaigns and set up target lists, create mass communications, track responses to campaigns, share leads with the sales team, and analyze the success of a marketing program.

Dynamics CRM 2011 also serves as a powerful hub for customer service scenarios. Features include rich account management, case routing and management, a built-in knowledge base, scheduling of call center resources, scripted Q&A workflows called Dialogs, contract management, and more.

Besides these three areas, Microsoft pitches Dynamics CRM as a general-purpose application platform called xRM, where the "X" stands for any sort of relationship management. Dynamics CRM has a robust underlying framework for screen design, security roles, data auditing, entity definition, workflow, and mobility, among others. Instead of building these foundational aspects into every application, we can build our data-driven applications within Dynamics CRM.

Microsoft has made a big move into the cloud with this release of Dynamics CRM 2011. For the first time in company history, a product was released online (Dynamics CRM Online) prior to the on-premises software. The hosted version of the application runs an identical codebase to the on-premises version, meaning that code built to support a local instance will work just fine in the cloud.

In addition to the big play in CRM hosting, Microsoft has also baked Windows Azure integration into Dynamics CRM 2011. Specifically, we now have the ability to configure a call-out to an Azure AppFabric Service Bus endpoint. To do this, the downstream service must implement a specific WCF interface, and within CRM, the Azure AppFabric plugin is configured to call that downstream service through the Azure AppFabric Service Bus relay service. For BizTalk Server to accommodate this pattern, we would want to build a proxy service that implements the required Dynamics CRM 2011 interface and forwards requests into a BizTalk Server endpoint. This article will not demonstrate this scenario, however, as the focus will be on integrating with an on-premises instance only.

Why Integrate Dynamics CRM and BizTalk Server?

There are numerous reasons to tie these two technologies together. Recall that BizTalk Server is an enterprise integration bus that connects disparate applications. There can be a natural inclination to hoard data within a particular application, but if we embrace real-time message exchange, we can actually have a more agile enterprise.

Consider a scenario where a customer's full "contact history" resides in multiple systems. The Dynamics CRM 2011 contact center may only serve a specific audience, and other systems within the company hold additional details about the company's customers. One design choice could be to bulk load that information into Dynamics CRM 2011 on a scheduled interval.
However, it may be more effective to call out to a BizTalk Server service that aggregates data across systems and returns a composite view of a customer's history with a company.

In a similar manner, think about how information is shared between systems. A public website for a company may include a registration page where visitors sign up for more information and deeper access to content. That registration event is relevant to multiple systems within the company. We could send that initial registration message to BizTalk Server and then broadcast that message to the multiple systems that want to know about that customer. A marketing application may want to respond with a personalized email welcoming that person to the website. The sales team may decide to follow up with that person if they expressed interest in purchasing products. Our Dynamics CRM 2011 customer service center could choose to automatically add the registration event so that it is ready whenever that customer calls in. In this case, BizTalk Server acts as a central router of data and invokes the exposed Dynamics CRM services to create customers and transactions.

Communicating from BizTalk Server to Dynamics CRM

The way that you send requests from BizTalk Server to Dynamics CRM 2011 has changed significantly in this release. In previous versions of Dynamics CRM, a BizTalk "send" adapter was available for communicating with the platform. Dynamics CRM 2011 no longer ships with an adapter, and developers are encouraged to use the WCF endpoints exposed by the product.

Dynamics CRM has both a WCF REST endpoint and a SOAP endpoint. The REST endpoint can only be used within the CRM application itself. For instance, you can build what is called a web resource that is embedded in a Dynamics CRM page. This resource could be a Microsoft Silverlight or HTML page that looks up data from three different Dynamics CRM entities and aggregates them on the page. This web resource can communicate with the Dynamics CRM REST API, which is friendly to JavaScript clients. Unfortunately, you cannot use the REST endpoint from outside of the Dynamics CRM environment, but because BizTalk cannot communicate with REST services, this has little impact on the BizTalk integration story.

The Dynamics CRM SOAP API, unlike its ASMX web service predecessor, is static and operates with a generic Entity data structure. Instead of having a dynamic WSDL that exposes typed definitions for all of the standard and custom entities in the system, the Dynamics CRM 2011 SOAP API has a set of operations (for example, Create, Retrieve) that function with a single object type. The Entity object has a property identifying which concrete object it represents (for example, Account or Contract), and a name/value pair collection that represents the columns and values in the object it represents. For instance, an Entity may have a LogicalName set to "Account" and columns for "telephone1", "emailaddress", and "websiteurl."

In essence, this means that we have two choices when interacting with Dynamics CRM 2011 from BizTalk Server. Our first option is to directly consume and invoke the untyped SOAP API. Doing this involves creating maps from a canonical schema to the type-less Entity schema. In the case of a Retrieve operation, we may also have to map the type-less Entity message back to a structured message for further processing. Below, we will walk through an example of this. The second option involves creating a typed proxy service for BizTalk Server to invoke.
Dynamics CRM has a feature-rich Software Development Kit (SDK) that allows us to create typed objects and send them to the Dynamics CRM SOAP endpoint. This proxy service will then expose a typed interface to BizTalk that operates, as desired, with a strongly typed schema. An upcoming exercise demonstrates this scenario.

Which choice is best? For simple solutions, it may be fine to interact directly with the Dynamics CRM 2011 SOAP API. If you are updating a couple of fields on an entity, or retrieving a pair of data values, the messiness of the untyped schema is worth the straightforward solution. However, if you are making large-scale changes to entities, or getting back an entire entity and publishing it to the BizTalk bus for more subscribers to receive, then working strictly with a typed proxy service is the best route. That said, we will look at both scenarios below, and you can make that choice for yourself.

Integrating Directly with the Dynamics CRM 2011 SOAP API

In the following series of steps, we will look at how to consume the native Dynamics CRM SOAP interface in BizTalk Server. We will first look at how to query Dynamics CRM to return an Entity. After that, we will see the steps for creating a new Entity in Dynamics CRM.

Querying Dynamics CRM from BizTalk Server

In this scenario, BizTalk Server will request details about a specific Dynamics CRM "contact" record and send the result of that inquiry to another system.

Getting Started with JavaFX

Packt
05 Oct 2010
11 min read
JavaFX 1.2 Application Development Cookbook
Over 60 recipes to create rich Internet applications with many exciting features:
- Easily develop feature-rich internet applications to interact with the user using various built-in components of JavaFX
- Make your application visually appealing by using various JavaFX classes (ListView, Slider, ProgressBar) to display your content and enhance its look with the help of CSS styling
- Enhance the look and feel of your application by embedding multimedia components such as images, audio, and video
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Using javafxc to compile JavaFX code

While it certainly makes it easier to build JavaFX with the support of an IDE (see the NetBeans and Eclipse recipes), it is not a requirement. In some situations, having direct access to the SDK tools is preferred (automated builds, for instance). This recipe explores the build tools that are shipped with the JavaFX SDK and provides steps to show you how to manually compile your applications.

Getting ready

To use the SDK tools, you will need to download and install the JavaFX SDK. See the recipe Installing the JavaFX SDK for instructions on how to do it.

How to do it...

Open your favorite text/code editor and type the following code. The full code is available from ch01/source-code/src/hello/HelloJavaFX.fx.

    package hello;

    import javafx.stage.Stage;
    import javafx.scene.Scene;
    import javafx.scene.text.Text;
    import javafx.scene.text.Font;

    Stage {
        title: "Hello JavaFX"
        width: 250
        height: 80
        scene: Scene {
            content: [
                Text {
                    font: Font { size: 16 }
                    x: 10
                    y: 30
                    content: "Hello World!"
                }
            ]
        }
    }

Save the file at location hello/HelloJavaFX.fx. To compile the file, invoke the JavaFX compiler from the command line, one directory up from where the file is stored (for this example, it would be executed from the src directory):

    javafxc hello/HelloJavaFX.fx

If your compilation command works properly, you will not get any messages back from the compiler. You will, however, see the file HelloJavaFX.class created by the compiler in the hello directory. If, however, you get a "file not found" error during compilation, ensure that you have properly specified the path to the HelloJavaFX.fx file.

How it works...

The javafxc compiler works in a similar way to your regular Java compiler. It parses and compiles the JavaFX script into Java byte code with the .class extension. javafxc accepts numerous command-line arguments to control how and what sources get compiled, as shown in the following command:

    javafxc [options] [sourcefiles] [@argfiles]

where options are your command-line options, followed by one or more source files, which can be followed by a list of argument files. Below are some of the more commonly used javafxc arguments:

- classpath (-cp): the classpath option specifies the locations (separated by a path separator character) where the compiler can find class files and/or library jar files that are required for building the application.

    javafxc -cp .:lib/mylibrary.jar MyClass.fx

- sourcepath: in more complicated project structures, you can use this option to specify one or more locations where the compiler should search for source files and satisfy source dependencies.

    javafxc -cp . -sourcepath .:src:src1:src2 MyClass.fx

- -d: with this option, you can set the target directory where compiled class files are to be stored.
The compiler will create the package structure of the class under this directory and place the compiled JavaFX classes accordingly.

    javafxc -cp . -d build MyClass.fx

The @argfiles option lets you specify a file which can contain javafxc command-line arguments. When the compiler is invoked and an @argfile is found, it uses the content of the file as an argument for javafxc. This can help shorten tediously long arguments into short, succinct commands. Assume file cmdargs has the following content:

    -d build
    -cp .:lib/api1.jar:lib/api2.jar:lib/api3.jar
    -sourcepath core/src:components/src:tools/src

Then you can invoke javafxc as:

    $> javafxc @cmdargs

See also
- Installing the JavaFX SDK

Creating and using JavaFX classes

JavaFX is an object-oriented scripting language. As such, object types, represented as classes, are part of the basic constructs of the language. This section shows how to declare, initialize, and use JavaFX classes.

Getting ready

If you have used other scripting languages such as ActionScript, JavaScript, Python, or PHP, the concepts presented in this section should be familiar. If you have no idea what a class is or what it should be, just remember this: a class is code that represents a logical entity (tree, person, organization, and so on) that you can manipulate programmatically or while using your application. A class usually exposes properties and operations to access the state or behavior of the class.

How to do it...

Let's assume we are building an application for a dealership. You may have a class called Vehicle to represent cars and other types of vehicles processed in the application. The next code example creates the Vehicle class. Refer to ch01/source-code/src/javafx/Vehicle.fx for the full listing of the code presented here.

Open your favorite text editor (or fire up your favorite IDE). Type the following class declaration:

    class Vehicle {
        var make;
        var model;
        var color;
        var year;
        function drive () : Void {
            println("You are driving a "
                "{year} {color} {make} {model}!")
        }
    }

Once your class is properly declared, it is ready to be used. To use the class, add the following code to the file:

    class Vehicle {
    ...
    }

    var vehicle = Vehicle {
        year: 2010
        color: "Grey"
        make: "Mini"
        model: "Cooper"
    };
    vehicle.drive();

Save the file as Vehicle.fx. Now, from the command line, compile it with:

    $> javafxc Vehicle.fx

If you are using an IDE, you can simply right-click on the file to run it. When the code executes, you should see:

    $> You are driving a 2010 Grey Mini Cooper!

How it works...

The previous snippet shows how to declare a class in JavaFX. Albeit a simple class, it shows the basic structure of a JavaFX class. It has properties represented by variable declarations:

    var make;
    var model;
    var color;
    var year;

and it has a function:

    function drive () : Void {
        println("You are driving a "
            "{year} {color} {make} {model}!")
    }

which can update the properties and/or modify the behavior (for details on JavaFX functions, see the recipe Creating and Using JavaFX functions). In this example, when the function is invoked on a vehicle object, it causes the object to display information about the vehicle on the console prompt.

Object literal initialization

Another aspect of JavaFX class usage is object declaration. JavaFX supports object literal declaration to initialize a new instance of the class.
This format lets developers declaratively create a new instance of a class using the class's literal representation, passing property literal values directly into the initialization block for the object's named public properties.

    var vehicle = Vehicle {
        year: 2010
        color: "Grey"
        make: "Mini"
        model: "Cooper"
    };

The previous snippet declares the variable vehicle and assigns to it a new instance of the Vehicle class with year = 2010, color = Grey, make = Mini, and model = Cooper. The values that are passed in the literal block overwrite the default values of the named public properties.

There's more...

The JavaFX class definition mechanism does not support a constructor as in languages such as Java and C#. However, to allow developers to hook into the life cycle of the object's instance creation phase, JavaFX exposes a specialized code block called init{} to let developers provide custom code which is executed during object initialization.

Initialization block

Code in the init block is executed as one of the final steps of object creation, after the properties declared in the object literal are initialized. Developers can use this facility to initialize values and set up resources that the new object will need. To illustrate how this works, the previous code snippet has been modified with an init block. You can get the full listing of the code at ch01/source-code/src/javafx/Vehicle2.fx.

    class Vehicle {
    ...
        init {
            color = "Black";
        }
        function drive () : Void {
            println("You are driving a "
                "{year} {color} {make} {model}!");
        }
    }

    var vehicle = Vehicle {
        year: 2010
        make: "Mini"
        model: "Cooper"
    };
    vehicle.drive();

Notice that the object literal declaration of the vehicle object no longer includes the color declaration. Nevertheless, the value of the color property will be initialized to Black in the init{} code block during the object's initialization. When you run the application, it should display:

    You are driving a 2010 Black Mini Cooper!

See also
- Declaring and using variables in JavaFX
- Creating and using JavaFX functions

Creating and using variables in JavaFX

JavaFX is a statically type-safe and type-strict scripting language. Therefore, variables (and anything which can be assigned to a variable, including functions and expressions) in JavaFX must be associated with a type, which indicates the expected behavior and representation of the variable. This section explores how to create, initialize, and update JavaFX variables.

Getting ready

Before we look at creating and using variables, it is beneficial to have an understanding of what is meant by data type and to be familiar with some common data types such as String, Integer, Float, and Boolean. If you have written code in other scripting languages such as ActionScript, Python, and Ruby, you will find the concepts in this recipe easy to understand.

How to do it...

JavaFX provides two ways of declaring variables: the def and the var keywords.

    def X_STEP = 50;
    println (X_STEP);
    X_STEP++; // causes error

    var x : Number;
    x = 100;
    ...
    x = x + X_LOC;

How it works...

In JavaFX, there are two ways of declaring a variable:

- def: The def keyword is used to declare and assign constant values. Once a variable is declared with the def keyword and assigned a value, it is not allowed to be reassigned a new value.
- var: The var keyword declares variables which are able to be updated at any point after their declaration.

There's more...

All variables must have an associated type. The type can be declared explicitly or be automatically coerced by the compiler.
Unlike Java (and similar to ActionScript and Scala), the type of the variable follows the variable's name, separated by a colon.

    var location:String;

Explicit type declaration

The following code specifies the type (class) that the variable will receive at runtime:

    var location:String;
    location = "New York";

The compiler also supports a short-hand notation that combines declaration and initialization.

    var location:String = "New York";

Implicit coercion

In this format, the type is left out of the declaration. The compiler automatically converts the variable to the proper type based on the assignment.

    var location;
    location = "New York";

Variable location will automatically receive a type of String during compilation because the first assignment is a string literal. Or, the short-hand version:

    var location = "New York";

JavaFX types

Similar to other languages, JavaFX supports a complete set of primitive types, as listed:

- :String - this type represents a collection of characters contained within quotes (double or single, as shown in the following examples). Unlike Java, the default value for String is empty ("").

    "The quick brown fox jumps over the lazy dog"
    'The quick brown fox jumps over the lazy dog'

- :Number - this is a numeric type that represents all numbers with decimal points. It is backed by the 64-bit double precision floating point Java type. The default value of Number is 0.0.

    0.01234
    100.0
    1.24e12

- :Integer - this is a numeric type that represents all integral numbers. It is backed by the 32-bit integer Java type. The default value of an Integer is 0.

    -44
    700
    0xFF

- :Boolean - as the name implies, this type represents the binary value of either true or false.

- :Duration - this type represents a unit of time. You will encounter its use heavily in animation and other instances where temporal values are needed. The supported units include ms, s, m, and h for millisecond, second, minute, and hour respectively.

    12ms
    4s
    12h
    0.5m

- :Void - this type indicates that an expression or a function returns no value. The literal representation of Void is null.

Variable scope

Variables can have three distinct scopes, which implicitly indicate the access level of the variable when it is being used.

Script level

Script variables are defined at any point within the JavaFX script file outside of any code block (including class definitions). When a script-level variable is declared, by default it is globally visible within the script and is not accessible from outside the script (without additional access modifiers).

Instance level

A variable that is defined at the top level of a class is referred to as an instance variable. An instance variable is visible within the class to the class members and can be accessed by creating an instance of the class.

Local level

The local scope is the least visible. Local variables are declared within code blocks such as functions, and they are visible only to members within the block.

6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you that containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers and development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them.

Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you if you should be using containers.

Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and doesn't really understand, containers could probably help you a lot.

Why do containers help simplify your codebase?

Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it's something that evolves out of years of solving intractable problems, with knock-on effects and consequences that then have to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says as much about a general inability to treat testing with the respect and time it deserves as anything else.

How do containers make testing easier?

There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don't need a fully-fledged testing environment for every application you test. What this all boils down to is that testing becomes much quicker and easier. In theory, then, this means the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed that principle in your workflow.

Read next: How to build 12 factor microservices on Docker

Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks.
If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier?

Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict.

That all being said, it is probably worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price; it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then either use them or you don't. With containers running on, say, the cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers involves much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for it. This takes us back to the issue of the complex codebase. Think of it this way: if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically.

How do containers solve DevOps challenges?
Containers can help solve the problems that DevOps aims to tackle by breaking up software into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than anything DevOps proponents managed in the pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make isolated changes or updates. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps: containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you don't need to go into your application and make wholesale changes - which may have unintended consequences - instead, you can simply make the change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.

An Introduction to JSF: Part 1

Packt
30 Dec 2009
6 min read
While the main focus of this article is learning how to use JSF UI components, rather than covering the JSF framework in complete detail, a basic understanding of fundamental JSF concepts is required before we can proceed. Therefore, by way of introduction, let's look at a few of the building blocks of JSF applications: the Model-View-Controller architecture, the JSF request processing lifecycle, managed beans, EL expressions, UI components, converters, validators, and internationalization (I18N).

The Model-View-Controller architecture

Like many other web frameworks, JSF is based on the Model-View-Controller (MVC) architecture. The MVC pattern promotes the idea of "separation of concerns", or the decoupling of the presentation, business, and data access tiers of an application.

The Model in MVC represents "state" in the application. This includes the state of user interface components (for example: the selection state of a radio button group, the enabled state of a button, and so on) as well as the application's data (the customers, products, invoices, orders, and so on). In a JSF application, the Model is typically implemented using Plain Old Java Objects (POJOs) based on the JavaBeans API. These classes are also described as the "domain model" of the application, and act as Data Transfer Objects (DTOs) to transport data between the various tiers of the application. JSF enables direct data binding between user interface components and domain model objects using the Expression Language (EL), greatly simplifying data transfer between the View and the Model in a Java web application.

The View in MVC represents the user interface of the application. The View is responsible for rendering data to the user, and for providing user interface components such as labels, text fields, buttons, radios, and checkboxes that support user interaction. As users interact with components in the user interface, events are fired by these components and delivered to Controller objects by the MVC framework. In this respect, JSF has much in common with a desktop GUI toolkit such as Swing or AWT. We can think of JSF as a GUI toolkit for building web applications. JSF components are organized in the user interface declaratively using UI component tags in a JSF view (typically a JSP or Facelets page).

The Controller in MVC represents an object that responds to user interface events and queries or modifies the Model. When a JSF page is displayed in the browser, the UI components declared in the markup are rendered as HTML controls. The JSF markup supports the JSF Expression Language (EL), a scripting language that enables UI components to bind to managed beans for data transfer and event handling. We use value expressions such as #{backingBean.name} to connect UI components to managed bean properties for data binding, and we use method expressions such as #{backingBean.sayHello} to register an event handler (a managed bean method with a specific signature) on a UI component. (A short sketch of such a managed bean appears at the end of this section.)

In a JSF application, the entity classes in our domain model act as the Model in MVC terms, a JSF page provides the View, and managed beans act as Controller objects. The JSF EL provides the scripting language necessary to tie the Model, View, and Controller concepts together. There is an important variation of the Controller concept that we should discuss before moving forward.
Like the Struts framework, JSF implements what is known as the "Front Controller" pattern, where a single class acts as the primary request handler or event dispatcher for the entire system. In the Struts framework, the ActionServlet performs the role of the Front Controller, handling all incoming requests and delegating request processing to application-defined Action classes. In JSF, the FacesServlet implements the Front Controller pattern, receiving all incoming HTTP requests and processing them in a sophisticated chain of events known as the JSF request processing lifecycle.

The JSF Request Processing Lifecycle

In order to understand the interplay between JSF components, converters, validators, and managed beans, let's take a moment to discuss the JSF request processing lifecycle. The JSF lifecycle includes six phases:

1. Restore/create view - The UI component tree for the current view is restored from a previous request, or it is constructed for the first time.
2. Apply request values - The incoming form parameter values are stored in server-side UI component objects.
3. Conversion/Validation - The form data is converted from text to the expected Java data types and validated accordingly (for example: required fields, length and range checks, valid dates, and so on).
4. Update model values - If conversion and validation were successful, the data is now stored in our application's domain model.
5. Invoke application - Any event handler methods in our managed beans that were registered with UI components in the view are executed.
6. Render response - The current view is re-rendered in the browser, or another view is displayed instead (depending on the navigation rules for our application).

To summarize the JSF request handling process, the FacesServlet (the Front Controller) first handles an incoming request sent by the browser for a particular JSF page by attempting to restore, or create for the first time, the server-side UI component tree representing the logical structure of the current View (Phase 1). Incoming form data sent by the browser is stored in the components such as text fields, radio buttons, checkboxes, and so on, in the UI component tree (Phase 2). The data is then converted from Strings to other Java types and is validated using both standard and custom converters and validators (Phase 3); a sketch of a custom validator appears at the end of this section. Once the data is converted and validated successfully, it is stored in the application's Model by calling the setter methods of any managed beans associated with the View (Phase 4). After the data is stored in the Model, the action method (if any) associated with the UI component that submitted the form is called, along with any other event listener methods that were registered with components in the form (Phase 5). At this point, the application's logic is invoked and the request may be handled in an application-defined way.

Once the Invoke Application phase is complete, the JSF application sends a response back to the web browser, possibly displaying the same view or perhaps another view entirely (Phase 6). The renderers associated with the UI components in the view are invoked and the logical structure of the view is transformed into a particular presentation format or markup language. Most commonly, JSF views are rendered as HTML using the framework's default RenderKit, but JSF does not require pages to be rendered only in HTML.
In fact, JSF was designed to be a presentation technology neutral framework, meaning that views can be rendered according to the capabilities of different client devices. For example, we can render our pages in HTML for web browsers and in WML for PDAs and wireless devices.
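To make the EL expressions discussed above more concrete, here is a minimal sketch of a managed bean that value expressions such as #{backingBean.name} and method expressions such as #{backingBean.sayHello} could refer to. The class name, the property, and the "success" navigation outcome are illustrative assumptions rather than code from this article; the bean would still need to be registered under the name backingBean (in faces-config.xml, or with annotations in later JSF versions).

    // A hypothetical managed bean registered under the name "backingBean".
    public class BackingBean {

        // Bound to an input component through the value expression #{backingBean.name}.
        private String name;

        public String getName() {
            return name;
        }

        // Called by the framework during the Update Model Values phase (Phase 4).
        public void setName(String name) {
            this.name = name;
        }

        // Action method referenced by the method expression #{backingBean.sayHello};
        // it runs during the Invoke Application phase (Phase 5), and the returned
        // String is treated as a navigation outcome.
        public String sayHello() {
            System.out.println("Hello, " + name + "!");
            return "success";
        }
    }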
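Similarly, here is a minimal sketch of a custom validator participating in the Conversion/Validation phase. The class and its five-character rule are invented for illustration; the javax.faces.validator.Validator interface shown is the standard JSF extension point, and in a real application the validator would still need to be registered and attached to a component.

    import javax.faces.application.FacesMessage;
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.validator.Validator;
    import javax.faces.validator.ValidatorException;

    // A hypothetical validator that rejects submitted values shorter than five characters.
    public class MinimumLengthValidator implements Validator {

        public void validate(FacesContext context, UIComponent component, Object value)
                throws ValidatorException {
            String text = (value == null) ? "" : value.toString();
            if (text.length() < 5) {
                FacesMessage message = new FacesMessage(
                        FacesMessage.SEVERITY_ERROR,
                        "Value is too short",
                        "Please enter at least five characters.");
                // Throwing ValidatorException marks the component invalid, so the
                // Update Model Values and Invoke Application phases are skipped
                // and the view is re-rendered with the error message.
                throw new ValidatorException(message);
            }
        }
    }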