WebRTC with SIP and IMS

Packt
29 Oct 2014
28 min read
In this article, Altanai Bisht, the author of the book WebRTC Integrator's Guide, discusses the interaction of a WebRTC client with the important IMS nodes and modules.

IP Multimedia Subsystem (IMS) is an architectural framework for IP multimedia communications and IP telephony based on convergent applications. It specifies three layers in a telecom network:

- Transport or Access layer: This is the bottom-most segment, responsible for interacting with end systems such as phones.
- IMS layer: This is the middleware responsible for authenticating and routing traffic, and for facilitating call control through the Service layer.
- Service or Application layer: This is the top-most layer, where all of the call control applications and Value Added Services (VAS) are hosted.

(For more resources related to this topic, see here.)

IMS standards are defined by the Third Generation Partnership Project (3GPP), which adopts and promotes Internet Engineering Task Force (IETF) Requests for Comments (RFCs). Refer to http://www.3gpp.org/technologies/keywords-acronyms/109-ims to learn more about the 3GPP IMS specification releases.

This article will walk us through the interaction of the WebRTC client with the important IMS nodes and modules. The WebRTC gateway is the first point of contact for SIP requests from the WebRTC client entering the IMS network. The WebRTC gateway converts the SIP over WebSocket implementation to legacy/plain SIP; that is, it is a WebRTC-to-SIP gateway that connects to the IMS world and is able to communicate with a legacy SIP environment. It can also translate other REST- or JSON-based signaling protocols into SIP. The gateway also handles the media operations, which involve DTLS, SRTP, RTP, transcoding, demuxing, and so on.

In this article, we will study a case where there is a simple IMS core environment, and the WebRTC clients interact after the signals traverse core IMS nodes such as the Call Session Control Function (CSCF), the Home Subscriber Server (HSS), and the Telecom Application Server (TAS).

The interaction with core IMS nodes

This section describes the sequence of steps that must be followed to integrate the WebRTC client with IMS. Before you go ahead, set up a Session Border Controller (SBC) / WebRTC gateway / SIP proxy node for the WebRTC client to interact with the IMS control layer. Direct the control towards the CSCF nodes of IMS, namely Proxy-CSCF, Interrogating-CSCF, and Serving-CSCF. The subscriber details and the location are updated in the HSS. The Serving-CSCF (S-CSCF) routes the call through the SIP Application Server to invoke any services before the call is processed. The Application Server, which is part of the IMS service layer, is the point of adding logic to call processing in the form of VAS. Additionally, we will cover the process of integrating a media server for inter-codec conversion between legacy SIP phones and WebRTC clients. The setup will allow us to support all SIP nodes and endpoints as part of the IMS landscape.

The following figure shows the placement of the SIP-over-WebSocket (SIPWS) to SIP gateway in the IMS network:

The WebRTC client is a web-based dynamic application that runs over a Web Application Server. For simplicity, we can club the components of the WebRTC client and the Web Application Server together and address them jointly as the WebRTC client, as shown in the following diagram:
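To make this flow concrete, here is a minimal sketch of how such a browser client could register through the gateway using the sipml5 JavaScript API; the gateway address, identities, and password are placeholder values, and the exact option names may vary between sipml5 versions:

SIPml.init(function () {
    // The WebSocket URL points at the WebRTC gateway, which relays plain SIP
    // towards the P-CSCF; all values below are placeholders.
    var stack = new SIPml.Stack({
        realm: 'open-ims.test',
        impi: 'alice',
        impu: 'sip:alice@open-ims.test',
        password: 'secret',
        websocket_proxy_url: 'ws://gateway.example.com:5062',
        events_listener: {
            events: 'started',
            listener: function (e) {
                // Once the stack is up, send a SIP REGISTER through the gateway
                var registerSession = stack.newSession('register');
                registerSession.register();
            }
        }
    });
    stack.start();
});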
There are four major components of the OpenIMS core involved in this setup, as described in the following sections. Along with these, two components of the WebRTC infrastructure (the client and the gateway) are also necessary to connect the WebRTC endpoints. Three optional entities are also described as part of this setup.

The components of OpenIMS are the CSCF nodes and the HSS. More information on each component is given in the following sections.

The Call Session Control Function

The three parts of the CSCF are described as follows:

- Proxy-CSCF (P-CSCF) is the first point of contact for a user agent (UA), to which all user equipment (UE) is attached. It is responsible for routing an incoming SIP request to other IMS nodes, such as the registrar and the Policy and Charging Rules Function (PCRF), among others.
- Interrogating-CSCF (I-CSCF) is the inbound SIP proxy server that queries the HSS to determine which S-CSCF should serve the incoming request.
- Serving-CSCF (S-CSCF) is the heart of the IMS core, as it enables centralized IMS service control: it defines routing paths, acts as the registrar, interacts with the Media Server, and much more.

Home Subscriber System

The IMS core Home Subscriber System (HSS) is the database component responsible for maintaining user profiles, subscriptions, and location information. This data is used in functions such as authentication and authorization of users of IMS services.

The components of the WebRTC infrastructure primarily comprise the WebRTC Web Application Server, the web-based WebRTC client, and the SIP gateway:

- WebRTC Web Application Server and client: The WebRTC client is intrinsically a web application composed of user interfaces, data access objects, and controllers to handle HTTP requests. A Web Application Server is where the application is hosted. As WebRTC is a browser-based technique, the client is an HTML-based web application. The call functionalities are rendered through the SIP JavaScript files, and the browser's native WebRTC capabilities are utilized to capture and transmit the data. A WebRTC service provider must embed the SIP call functions in a web page that has a call interface. It must provide values for the To and From SIP addresses, a div to play audio/video content, and access to the user's resources such as the camera, mic, and speakers (a sketch of this media access follows this list).
- WebRTC to IMS gateway: This is the point where the signal is converted from SIP over WebSockets to legacy/plain SIP. It renders the signaling into a state that the IMS network nodes can understand. For media, it performs transcoding from the WebRTC standard codecs to others. It also performs decryption and demuxing of audio/video/RTCP/RTP.

There are other servers that act as IMS nodes as well, such as the STUN/TURN Server, the Media Server, and the Application Server. They are described as follows:

- STUN/TURN Server: These are employed for NAT traversal and for overcoming firewall restrictions through ICE candidates. They might not be needed when the WebRTC client is on the Internet and the WebRTC gateway is also listening on a publicly accessible IP.
- Media Server: The Media Server plays a role when a media relay is required between the UEs instead of direct peer-to-peer communication. It also comes into the picture for services such as voicemail, Interactive Voice Response (IVR), playback, and recording.
- Application Server (AS): The Application Server is the point where developers can implement customized call control logic as VAS, for example, call redirection when the receiver is absent, or selective call screening.
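Regarding the client's access to the camera, mic, and speakers mentioned above, the browser prompts the user through the getUserMedia API. The following is a minimal sketch of how a client might acquire and render local media; the element ID is hypothetical, and older 2014-era browsers used prefixed, callback-based variants of this call:

// Ask the browser for mic + camera and attach the stream to a <video> element
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(function (stream) {
        document.getElementById('localVideo').srcObject = stream;
    })
    .catch(function (err) {
        console.error('media access denied:', err);
    });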
The IP Multimedia Subsystem core

IMS is an architecture for real-time multimedia (voice, data, video, and messaging) services over a common IP network, and it defines a layered architecture. According to the 3GPP specification, IMS entities are classified into six categories:

- Session management and routing (CSCF, GGSN, and SGSN)
- Databases (HSS and SLF)
- Interworking elements (BGCF, MGCF, IM-MGW, and SGW)
- Services (Application Server, MRFC, and MRFP)
- Policy support entities (PDF)
- Billing

Interoperability with the SIP infrastructure requires a session border controller to decrypt the WebRTC control and media flows. A media node is also set up for transcoding between WebRTC codecs and those of legacy phones. When a gateway is involved, the WebRTC voice and video peer connections are between the browser and the border controller. In our case, we have been using Kamailio in this role. Kamailio is an open source SIP server capable of processing both SIP and SIP-over-WebSocket (SIPWS) signaling.

As WebRTC is made to function over SIP-based signaling, it can enjoy all of the services and solutions made for the IMS environment. Telecom operators can mount services directly in the Service layer, and subscribers can avail themselves of these services right from their web browsers through the WebRTC client. This adds a new dimension to user accessibility and experience. A WebRTC client's true potential comes into effect only when it is integrated with the IMS framework.

We have some ready-made, open IMS setups that have been tested for WebRTC-to-IMS integration. The setups are as follows:

- 3GPP IMS: This is the IMS specification by 3GPP, an association of telecommunications groups
- OpenIMS: This is the open source implementation of the IMS CSCFs and a lightweight HSS for the IMS core
- Doubango IMS: This is a cross-platform and open source 3GPP IMS/LTE framework
- Kamailio IMS: Kamailio version 4.0 and above incorporates IMS support by means of OpenIMS

We can also use any other IMS structure for the integration. In this article, we will demonstrate the use of OpenIMS. For this, a WebRTC client and a non-WebRTC client must be interoperable by means of signaling and media transcoding. Also, the essential components of the IMS world, such as the HSS, Media Server, and Application Server, should be integrated with the WebRTC setup.

The OpenIMS Core

The OpenIMS Core is an open source implementation of the core elements of the IMS network, which includes the IMS CSCF nodes and the HSS. The following diagram shows how a connection is made from WebRTC to the CSCF:

The following are the prerequisites to install the OpenIMS core. Make sure that you have the following packages installed on your Linux machine, as their absence can hinder the IMS installation process:

- Git and Subversion
- GCC 3/4, Make, JDK 1.5, Ant
- MySQL as the database
- Bison and Flex, the Linux utilities
- libxml2 (version 2.6 and above) and libmysql, with their development versions

Install these packages from the Synaptic package manager or using the command prompt. For the LoST interface of E-CSCF, use the following command lines:

sudo apt-get install mysql-server libmysqlclient15-dev libxml2 libxml2-dev bind9 ant flex bison curl libcurl4-gnutls-dev
sudo apt-get install curl libcurl4-gnutls-dev

The Domain Name Server (DNS), bind9, should be installed and running. To do this, we can run the following command line:

sudo apt-get install bind9

We need a web browser to review the status of the connection on the web console. To download a web browser, go to its download page; for example, Chrome can be downloaded from https://www.google.com/intl/en_in/chrome/browser/.
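Before compiling anything, it can save time to confirm that the key prerequisites are present and at usable versions. A quick check along these lines works on most systems, although the exact output will differ:

# Verify the toolchain and services needed by the OpenIMS build
java -version        # FHoSS expects an older JDK; see the note that follows
ant -version
mysql --version
named -v             # confirms bind9 is installed
gcc --version
flex --version && bison --version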
We must verify that the installed Java version is 1.5 or above, so as not to break the compilation process midway, and set the path of JAVA_HOME as follows:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre

The output of the command line that checks the Java version is as follows:

The following are the steps to install OpenIMS. As the source code is preconfigured to work from a standard file path of /opt, we will use the predefined directory for installation.

Go to the /opt folder and create a directory to store the OpenIMS core, using the following command lines:

mkdir /opt/OpenIMSCore
cd /opt/OpenIMSCore

Create a directory to store FHoSS, check out the HSS, and compile the source using the following command lines:

mkdir FHoSS
svn checkout http://svn.berlios.de/svnroot/repos/openimscore/FHoSS/trunk FHoSS
cd FHoSS
ant compile deploy

Note that the code requires Java version 7 or lower to work.

Also, create a directory to store ser_ims, check out the CSCFs, and then install ser_ims using the following command lines:

mkdir ser_ims
svn checkout http://svn.berlios.de/svnroot/repos/openimscore/ser_ims/trunk ser_ims
cd ser_ims
make install-libs all

After downloading and installing, the contents of the OpenIMS installation directory are as follows:

By default, the nodes are configured to work only on the local loopback, and the default domain configured is open-ims.test. The MySQL access rights are also set only for local access. However, this can be modified using the following steps:

1. Run the following command line:

./opt/ser_ims/cfg/configurator.sh

2. Replace 127.0.0.1 (the default IP for the local host) with the new IP address with which the IMS Core server should be configured.
3. Replace the home domain (open-ims.test) with the required domain name.
4. Change the database passwords.

The following figure depicts the domain change process through configurator.sh:

To resolve the domain name, we need to add the new IMS domain to the bind configuration directory. Change to the system's bind folder (cd /etc/bind) and copy the open-ims.dnszone file there after replacing the domain name:

sudo cp /opt/OpenIMSCore/ser_ims/cfg/open-ims.dnszone /etc/bind/

Open the named.conf file and include open-ims.dnszone in the list that already exists:

include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
include "/etc/bind/open-ims.dnszone";

One can also add a reverse zone file which, contrary to the DNS zone file, converts an address to a name. Restart the naming server using the following command:

sudo service bind9 restart

In the event of any failure or error, the system logs/reports can be inspected using the following command line:

tail -f /var/log/syslog

Open the MySQL client (sudo mysql) and run the SQL scripts that create the databases and tables for HSS operations:

mysql -u root -p -h localhost < ser_ims/cfg/icscf.sql
mysql -u root -p -h localhost < FHoSS/scripts/hss_db.sql
mysql -u root -p -h localhost < FHoSS/scripts/userdata.sql

The following screenshot shows the tables for the HSS database:

Users should be registered with a domain; that is, one needs to edit the userdata.sql file, replacing the default domain name with the required domain name. Note that while it is not mandatory to change the domain, it is good practice to use a domain that reflects the enterprise or service provider's name.
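Before registering any clients, it is worth confirming that bind is actually answering for the new zone. A quick lookup against the local resolver, shown here for the default open-ims.test domain with node names as defined in the sample zone file, might look like this:

# Query the local bind9 instance for the IMS domain and its nodes
dig @127.0.0.1 open-ims.test any
dig @127.0.0.1 pcscf.open-ims.test
nslookup scscf.open-ims.test 127.0.0.1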
The following screenshot shows the user domains changed from the default to the personal domain:

Copy the pcscf.cfg, pcscf.sh, icscf.cfg, icscf.xml, icscf.sh, scscf.cfg, scscf.xml, and scscf.sh files to the /opt/OpenIMSCore location.

Start the Proxy Call Session Control Function (P-CSCF) by executing the pcscf.sh script. The default port assigned to the P-CSCF is 4060. A screenshot of the running P-CSCF is as follows:

Start the Interrogating Call Session Control Function (I-CSCF) by executing the icscf.sh script. The default port assigned to the I-CSCF is 5060. If the script displays a warning about the connection, it is only because the FHoSS server still needs to be started. A screenshot of the running I-CSCF is as follows:

Start the S-CSCF by executing the scscf.sh script. The default port assigned to the S-CSCF is 6060. A screenshot of the running S-CSCF is as follows:

Start the FOKUS Home Subscriber Server (FHoSS) by executing FHoSS/deploy/startup.sh. The HSS communicates using the Diameter protocol; the ports used for this protocol are 3868, 3869, and 3870. A screenshot of the running HSS is shown as follows:

Go to http://<yourip>:8080 and log in to the web console with hssAdmin as the username and hss as the password, as shown in the following screenshot.

To register the WebRTC client with OpenIMS, we must use an IMS gateway that converts the SIP over WebSocket format to SIP. To achieve this, use the IP, port, or domain of the P-CSCF node while registering the client. The flow will be from the WebRTC client to the IMS gateway to the P-CSCF of the IMS Core; for example, from the SIPML5 WebRTC client to the webrtc2sip gateway to the P-CSCF of the OpenIMS Core. The subscribers are visible in the IMS subscription section of the OpenIMS portal. The following screenshot shows the user identities and their statuses on the web-based admin console:

As far as other components are concerned, they can subsequently be added to the core network over their respective interfaces.

The Telecom Application Server

The TAS is where the logic for processing a call resides. It can be used to add applications such as call blocking, call forwarding, and call redirection according to predefined values. The inputs can be assigned at runtime or stored in a database using a suitable provisioning system. The following diagram shows the connection between WebRTC and the IMS Core Server:

For demonstration purposes, we can use an Application Server that can host SIP servlets and integrate it with the IMS core.

The Mobicents Telecom Application Server

Mobicents SIP Servlets and Java APIs for Integrated Networks-Service Logic Execution Environment (JAIN-SLEE) are open platforms on which to deploy new call controller logic and other converged applications. The steps to install the Mobicents TAS are as follows:

1. Download the SIP Application Server package from https://code.google.com/p/sipservlets/wiki/Downloads and unzip the contents.
2. Make sure that the Java environment variables are in place.
3. Start the JBoss container from mobicents/jboss-5.1.0.GA/bin. On MS Windows, run run.bat; on Linux, run run.sh.

The following figure displays the traces on the console when the server is started on JBoss:

Mobicents applications can also be developed by installing the Tomcat/Mobicents plugin in the Eclipse IDE. The server can also be added as a Mobicents instance, enabling quick deployment of applications. Open the web console to review the settings.
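Before deploying applications, it can help to confirm that the IMS nodes and the container are all listening on their expected ports. A minimal check, using the default port numbers mentioned in this walkthrough, could be:

# P-CSCF (4060), I-CSCF (5060), S-CSCF (6060) usually listen on UDP;
# Diameter (3868-3870) and the FHoSS console (8080) on TCP
netstat -lnpu | grep -E ':(4060|5060|6060)'
netstat -lnpt | grep -E ':(386[89]|3870|8080)'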
The following screenshot displays the process:

In order to deploy resource adaptors, enter:

ant -f resources/<name of resource adapter>/build.xml deploy

To undeploy a resource adaptor, execute the undeploy target with the name of the resource adapter:

ant -f resources/<name of resource adapter>/build.xml undeploy

Make sure that you have Apache Ant 1.7. The deployed instances should be visible in the web console, as follows:

To deploy and run SIP servlet applications, use the following command line:

ant -f examples/<name of application directory>/build.xml deploy-all

Configure the CSCF to include the Application Server in the path of every incoming SIP request and response. With the introduction of the TAS, it is now possible to provide customized call control logic to all subscribers or to particular subscribers. The SIP solutions and services can range from simple activities, such as call screening and call rerouting, to complex call-handling applications, such as selective call screening based on the user's calendar. Some more examples of SIP applications are given as follows:

- Speed Dial: This application lets the user make a call using pre-programmed numbers that map to the actual SIP URIs of users.
- Click to Dial: This application makes a call using a web-based GUI. However, it is very different from WebRTC, as it makes/receives the call through an external SIP phone.
- Find Me Follow Me: This application is beneficial if the user is registered on multiple devices simultaneously, for example, a SIP phone, X-Lite, and WebRTC. In such a case, when there is an incoming call, each of the user's devices rings for a few seconds, in order of their recent use, so that the user can pick up the call on the device that is nearest to him.

These services are often referred to as VAS; they can be innovative and can take the user experience to new heights.

The Media Server

To enable features such as Interactive Voice Response (IVR), voicemail recording, and playing announcements, the Media Server plays a critical role. The Media Server can be used as a standalone entity in the WebRTC infrastructure, or it can be referenced from the SIP server in the IMS environment.

The FreeSWITCH Media Server

FreeSWITCH has powerful Media Server capabilities, including functions such as IVR, conferencing, and voicemail. We will first see how to use FreeSWITCH as a standalone entity that provides SIP and RTP proxy features. Let's configure and install a basic setup of the FreeSWITCH Media Server using the following steps:

Download and store the source code for compilation in the /usr/src folder by running the following command lines:

cd /usr/src
git clone -b v1.4 https://stash.freeswitch.org/scm/fs/freeswitch.git

A directory named freeswitch is created, and the binaries will be stored in this folder. Assign all permissions to it using the following command line, replacing <username> with the name of the user who should own the folder:

sudo chown -R <username> /usr/local/freeswitch

Go to the directory where the source is stored:

cd /usr/src/freeswitch

Then, run bootstrap using the following command line:

./bootstrap.sh

One can add additional modules by editing the configuration file using the vi editor. We can open the file using the following command line:

vi modules.conf

The names of the modules are already listed. Remove the # symbol before a name to include that module at runtime, or add # to skip the module, as in the sketch that follows.
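For illustration, a short, hypothetical excerpt of modules.conf with one module disabled and the rest enabled looks as follows; which modules you actually enable depends on your deployment:

# modules.conf excerpt: uncommented lines are built, '#' lines are skipped
applications/mod_commands
applications/mod_conference
applications/mod_voicemail
#applications/mod_soundtouch
codecs/mod_g723_1
codecs/mod_opus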
Then, run the configure command:

./configure --enable-core-pgsql-support

Use the make command and install the components:

make && make install

Go to the Sofia profile and uncomment the parameter defined for WebSocket binding. By doing so, WebRTC clients can register with FreeSWITCH on port 443. Sofia is the SIP stack used by FreeSWITCH; by default, it supports only pure SIP requests. To let WebRTC clients register with FreeSWITCH's SIP server, uncomment the following:

<!-- uncomment for SIP over WebSocket support -->
<param name="ws-binding" value=":443"/>

Install the sound files using the following command line:

make all cd-sounds-install cd-moh-install

Go to the installation directory, and in the vars.xml file under freeswitch/conf/, make sure that the codec preferences are set as follows:

<X-PRE-PROCESS cmd="set" data="global_codec_prefs=G722,PCMA,PCMU,GSM"/>
<X-PRE-PROCESS cmd="set" data="outbound_codec_prefs=G722,PCMA,PCMU,GSM"/>

Make sure that the SIP profile uses these codec values directly, as follows:

<param name="inbound-codec-prefs" value="$${global_codec_prefs}"/>
<param name="outbound-codec-prefs" value="$${global_codec_prefs}"/>

We can later add more codecs, such as VP8 for video calling/conferencing.

To start FreeSWITCH, go to the /freeswitch/bin installation directory and run FreeSWITCH. Run the command-line console, which will be used to control and monitor the passing SIP packets, by going to the /freeswitch/bin installation directory and executing fs_cli. The following is a screenshot of the FreeSWITCH client console:

Go to the /freeswitch/conf/sip_profiles installation directory and look at the existing configuration files. Load and start a SIP profile using the following command line:

sofia profile <name of profile> start load

Restart and reload the profile after any changes using the following command line:

sofia profile <name of profile> restart reload

Check that it is working by executing the following command line:

sofia status

We can check the status of an individual SIP profile by executing the following command line:

sofia status profile <name of profile> reg

The preceding figure depicts the status of the users registered with the server at a given point in time.

Media Services

The following steps outline the process of using the FreeSWITCH media services:

Register the SIP softphone and the WebRTC client with FreeSWITCH. Use sample values between 1000 and 1020 initially; later, we can configure more users under the /freeswitch/conf/directory installation directory (a sketch of such a user definition follows this section). The following are sample values to register Kapanga:

Username: 1002
Display name: any
Domain/Realm: 14.67.87.45
Outbound proxy: 14.67.87.45:5080
Authorization user: 1002
Password: 1234

The sample values for WebRTC client registration, if, for example, we decide to use the sipml5 WebRTC client, are as follows:

Display name: any
Private identity: 1001
Public identity: sip:1001@14.67.87.45
Password: 1234
Realm: 14.67.87.45
WebSocket Server URL: ws://14.67.87.45:443

Note that the values used here are arbitrary and for the purpose of understanding only. The IP denotes the public IP of the FreeSWITCH machine, and the port is the WebSocket port configured in the Sofia profile. As seen in the following screenshot, we need to tick the Enable RTCWeb Breaker option under Expert settings to compensate for any incompatibility between the WebSocket and SIP standards that might arise:
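As mentioned above, additional users beyond the default range can be defined under conf/directory/default/. A minimal, hypothetical user file for an extra user 1021 (modeled on the stock 1000.xml files that ship with FreeSWITCH) would look like this:

<!-- conf/directory/default/1021.xml: a hypothetical extra user -->
<include>
  <user id="1021">
    <params>
      <param name="password" value="1234"/>
      <param name="vm-password" value="1021"/>
    </params>
    <variables>
      <variable name="user_context" value="default"/>
      <variable name="effective_caller_id_name" value="Extension 1021"/>
      <variable name="effective_caller_id_number" value="1021"/>
    </variables>
  </user>
</include>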
Make a call between the SIP softphone and the WebRTC client. In this case, the signaling and media pass through FreeSWITCH acting as a proxy. A call from a WebRTC client is depicted in the following screenshot; the SIP messages pass through the FreeSWITCH server and are therefore visible in the FreeSWITCH client console. In this case, the server is operating in the default mode; the other modes are the bypass and proxy modes.

Make a call between two WebRTC clients, where SIP and RTP pass through FreeSWITCH as a proxy. We can use other services of FreeSWITCH as well, such as voicemail, IVR, and conferencing.

We can also configure this setup in such a way that the media passes through the FreeSWITCH Media Server while the SIP signaling goes via the Kamailio SIP server. Use the RTP proxy in the SIP proxy server, in our case Kamailio, to pass the RTP media through the Media Server. The RTP proxy module of Kamailio should be built and configured in the kamailio.cfg file. The RTP proxy forces the RTP to pass through a node specified in the settings parameters. It enables communication between SIP user agents behind NAT and can also be used to set up a relaying host for RTP streams.

Configure the RTP Engine as the media proxy agent for RTP. It will be used to force the WebRTC media through it, rather than the peer-to-peer fashion in which WebRTC is designed to operate. Perform the following steps to configure the RTP Engine:

Go to the Kamailio installation directory and then to the RTPProxy module. Run the make command and install the proxy engine:

cd rtpproxy
./configure && make

Load the module and its parameters in the kamailio.cfg file:

listen=udp:<ip>:<port>
..
loadmodule "rtpproxy.so"
..
modparam("rtpproxy", "rtpproxy_sock", "unix:/var/run/rtpproxy/rtpproxy.sock")

Add rtpproxy_manage() for all of the requests and responses in the kamailio.cfg file. An example of rtpproxy_manage for INVITE is:

if (is_method("INVITE")) {
...
rtpproxy_manage();
...
};

Get the source code of the RTP Engine using git, as follows:

git clone https://github.com/sipwise/rtpengine.git

Go to the daemon folder in the installation directory and run the make command, as follows:

sudo make

Start rtpengine in the default user-space mode on the local machine:

sudo ./rtpengine --ip=10.1.5.14 --listen-ng=12334

Check that rtpengine is running, using the following command:

ps -ef | grep rtpengine

Note that rtpengine must be installed on the same machine as the Kamailio SIP server.

In the case of the sipml5 client, after configuring the modules described in the preceding section and before making a call through the Media Server, the flow of the media will become one of the following:

- In the case of voicemail/IVR: WebRTC client to RTP proxy node to Media Server
- In the case of a call through a media relay: WebRTC client A to RTP proxy node to Media Server to RTP proxy to WebRTC client B

The following diagram shows the MediaProxy relay between WebRTC clients:

The potential of a media server lies in its transcoding of various codecs. Different phones / call clients / software that support SIP as the signaling protocol do not necessarily support the same media codecs. When a Media Server is absent and the codecs do not match between the caller and the receiver, the attempt to make a call is abruptly terminated at the point where the media exchange needs to take place, that is, after the invite, success response, and acknowledgement are sent.
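Incidentally, on the Kamailio side, rtpengine has a dedicated module analogous to the classic rtpproxy module configured earlier. If you prefer to drive rtpengine directly from kamailio.cfg, the wiring is a sketch along these lines, with the control socket matching the --listen-ng port used above:

# kamailio.cfg: using the rtpengine module instead of rtpproxy
loadmodule "rtpengine.so"
modparam("rtpengine", "rtpengine_sock", "udp:127.0.0.1:12334")

route {
    ...
    if (is_method("INVITE")) {
        rtpengine_manage();   # offer/answer handling is inferred from the message
    }
    ...
}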
In the following figure, the setup for traversing media through the FreeSWITCH Media Server and signaling through the Kamailio SIP server is depicted:

The role of the rtpproxy-ng engine is to enable media to pass via the Media Server; this is shown in the following diagram:

WebRTC over firewalls and proxies

There are many complicated issues involved in the correct working of WebRTC across domains, NATs, geographies, and so on. For now, it is important that the firewall of a system, or any kind of port-blocking policy, be turned off in order to make a successful audio-video WebRTC call between any two parties that are not on the same Local Area Network (LAN). For the user to not have to switch the firewall off, we need to configure a Simple Traversal of UDP through NAT (STUN) server, or modify the Interactive Connectivity Establishment (ICE) parameters in the SDP exchanged.

STUN helps in packet routing for devices behind a NAT firewall. STUN only helps in device discoverability, by assigning publicly accessible addresses to devices within a private local network. Traversal Using Relay NAT (TURN) servers also serve to interconnect endpoints behind NAT; as the name suggests, TURN forces media to be proxied through the server. To learn more about ICE as a NAT-traversal mechanism, refer to the official document, RFC 5245.

The ICE features are defined by sipML5 in the sipml.js file. They are added to the SIP SDP during the initial phase of setting up the SIP stack. Snippets from the sipml.js file regarding the ICE declaration are given as follows:

var configuration = {
...
websocket_proxy_url: 'ws://192.168.0.10:5060',
outbound_proxy_url: 'udp://192.168.0.12:5060',
ice_servers: [{ url: 'stun:stun.l.google.com:19302' }, { url: 'turn:[email protected]', credential: 'myPassword' }],
...
};

Under the postInit function in the call.htm page, add the following:

oConfigCall = {
...
events_listener: { events: '*', listener: onSipEventSession },
sip_caps: [
{ name: '+g.oma.sip-im' },
{ name: '+sip.ice' },
{ name: 'language', value: '"en,fr"' }
]
};

With this, the WebRTC client is able to reach a client behind the firewall; the media, however, may display unpredictable behavior. If you need to create your own STUN/TURN server, you can take the help of RFC 5766, or you can refer to open source implementations, such as the project at the following site:

https://code.google.com/p/rfc5766-turn-server/

When setting the parameters for WebRTC, we can add our own STUN/TURN server. The following screenshot shows the inputs suitable for ICE Servers if you are using your own TURN/STUN server:

If there are no firewall restrictions, for example, if the users are on the same network without any corporate proxies and port blocks, we can omit ICE by entering empty brackets, [], in the ICE Servers option on the Expert settings page of the WebRTC client.
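A quick way to verify that a STUN/TURN entry actually works is to create a peer connection with it and watch which candidate types are gathered; relay candidates indicate a reachable TURN server. A minimal sketch using the standard browser API follows; the TURN server and credentials are placeholders, and older 2014-era browsers used prefixed, callback-based variants:

// Gather ICE candidates and log them (host / srflx / relay entries)
var pc = new RTCPeerConnection({
    iceServers: [
        { urls: 'stun:stun.l.google.com:19302' },
        { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'myPassword' }
    ]
});
pc.onicecandidate = function (e) {
    if (e.candidate) { console.log(e.candidate.candidate); }
};
pc.createDataChannel('probe'); // gives the offer something to negotiate
pc.createOffer().then(function (offer) { return pc.setLocalDescription(offer); });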
The final architecture for the WebRTC-to-IMS integration

At the end of this article, we arrive at an architecture similar to the following diagram, which depicts a basic WebRTC-to-IMS architecture. The diagram places the WebRTC client in the Transport layer, as it is the user endpoint. The IMS entities (CSCF and HSS), the WebRTC to IMS gateway, and the Media Server nodes are placed in the Network Control layer, as they help in signal and media routing. The applications for call control are placed in the top-most Application layer, which processes the call control logic. This architecture serves to provide a basic IMS-based setup for SIP-based WebRTC client interaction.

Summary

In this article, we saw how to interconnect the WebRTC setup with the IMS infrastructure. This included interaction with the CSCF nodes, namely the P-CSCF, I-CSCF, and S-CSCF, after building and installing them from source. The FreeSWITCH Media Server was also discussed, and the steps to build and integrate it were practiced. Mobicents was used as the Application Server to embed call control logic, with Kamailio as the SIP server. NAT traversal via STUN/TURN servers was also discussed, and its importance was highlighted. To deploy a WebRTC solution integrated with the IMS network, we must ensure that all of the required IMS nodes are consulted while making a call, that the values are reflected in the HSS data store, and that incoming SIP requests and responses are routed via the call logic of the Application Server before a call is connected.

Resources for Article:

Further resources on this subject:
- Using the WebRTC Data API [Article]
- Implementing Stacks using JavaScript [Article]
- Applying WebRTC for Education and E-learning [Article]

Installing dotProject

Packt
22 Oct 2009
8 min read
This article will include:

- dotProject setup options, including server, database, and browser issues
- Prerequisites for installation of the tool
- The process for control panel and browser-based installations
- Troubleshooting your installation

Installing dotProject is usually an automated process if your server and database are already installed and configured. dotProject is packaged with an installation wizard that walks you through the basic setup process. It is always wise to have an understanding of the process and the setup options before you begin.

Prerequisites

It is important to make sure that everything is ready and in place for dotProject to be installed. Let's go over what we need to have prepared for a successful installation of dotProject.

Before you install

It seems redundant to review the requirements again, doesn't it? There are a few last-minute things to discuss, especially if a control panel installation is not possible. First, make sure that the software required to run dotProject is already installed. Installing a web server, MySQL, and PHP is beyond the scope of this book; there are many fine books and online materials that explain their installation in detail. The dotProject team recommends the following environment:

- Apache web server (version 1.3.x or 2.x)
- MySQL server (version 3.23.x)
- A downloaded copy of dotProject; 2.0.4 or later is ideal. The most recent stable release can be downloaded from SourceForge.

MySQL should be set up first, so that a dotProject user can create temporary tables during installation. Specifically, the database user should have ALTER and DROP permissions. In the section on browser-based installation, we will go over how to deal with the config.php file. If your installation already contains a config.php file (not a config_dist.php file, and so on), then dotProject will assume you are trying to upgrade.

Your PHP installation should have register_globals set to OFF in order for dotProject to run in an optimized and more secure mode. The dotProject installer automatically detects the state of register_globals. dotProject will work with register_globals set to ON, but it is not recommended.

LAMP, WAMP, or WIMP?

There are several key requirements to run dotProject: you must have an active web server running PHP and MySQL, and an Internet browser. There are three main web server setups used to run dotProject. Which one you pick depends on what you already have and whether you have a preference for one over the other. If you use an Internet Service Provider (ISP), you may not have a choice:

- LAMP: Linux, Apache, MySQL, PHP
- WAMP: Windows, Apache, MySQL, PHP
- WIMP: Windows, IIS, MySQL, PHP

LAMP is the most popular in the open source community, as it provides an entirely open source environment.

Web server

Most web servers used today are either Apache or Microsoft IIS. Apache version 1.3.x or 2.x should be used. Your ISP, or that clever person in the IT department, knows which one your organization is using. There are always exceptions, so check the dotProject forums if you are using a different web server. Apache is the preferred environment for running dotProject.

PHP

To install dotProject 2.0, you must be using version 4.1 or higher of the very popular online programming language PHP. If you are using an Internet Service Provider, check your service details to see if PHP is provided. PHP can be downloaded from http://www.php.net/downloads.php. PHP 4.4.6 is the last stable version of PHP 4; a quick way to verify your version and settings is shown below.
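Before proceeding, you can confirm from the command line which PHP version is active and whether register_globals is off, as recommended earlier. Output will vary by system:

php -v                              # expect a 4.1+ version for dotProject 2.0
php -i | grep register_globals      # ideally reports: register_globals => Off => Off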
PHP 5 is not recommended for use with version 2.0.4.

MySQL

dotProject uses the MySQL database system, and you will need to have it installed before you begin as well. Version 3.23.x is recommended for use with dotProject. MySQL can be downloaded from http://www.mysql.org/downloads/. The dotProject team recommends that MySQL version 5 and above not be used with version 2.0.4 of dotProject. The recent release of dotProject, version 2.1.0-rc1, has been made more compatible with PHP 5 and MySQL 5; however, the incorporated changes do not take care of this completely. The features of this release are discussed at http://docs.dotproject.net/index.php/What%27s_New_-_2.1.0_-_rc1.

Windows

Using a bundled combination of PHP/Apache/MySQL is the best way to go if you do not already have them installed. This will save you the time and headache of installing them one at a time. The dotProject volunteers suggest Apache2Triad, available at http://apache2triad.sourceforge.net. Since dotProject has limited compatibility with PHP 5, version 1.2.3 is the advisable download.

Browser

dotProject works best with browsers that support cascading style sheets (CSS) and JavaScript. JavaScript and cookies should be turned on for full functionality. Most recent browsers, such as Internet Explorer (version 5.5 or better), Mozilla 1.2, Netscape 7.x, and Firefox, will work just fine. dotProject's PNG image files with alpha transparency render best in Internet Explorer 6.0 and above; Internet Explorer 7 provides increased support for PNG image files.

Mail server

As of version 2.0, a mail server is not a requirement for sending mail. Administrators can set up outgoing mail in the Administration panel.

Fonts

TrueType fonts are used by JpGraph, which is in turn used by the Gantt charts module. Most of the fonts JpGraph uses should already be installed on your system. Not all of the fonts are provided with dotProject, because some of them have very specific licenses. If the Gantt charts module insists that font files are missing and you don't already have a spare copy of the files, search SourceForge or another reliable site for the available fonts.

Memory limit

The Gantt charts module can eat up your allocated memory. If the Gantt charts won't appear and there is no error, chances are you've reached the memory limit set in the php.ini file. If your service is hosted, you will need to talk to your Internet Service Provider about increasing the memory limit set in your php.ini file, as sketched below.
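For a self-hosted installation, the relevant php.ini directive looks like the following; the value shown is only an example, and the file's location varies by system:

; php.ini - raise this if Gantt chart rendering runs out of memory
memory_limit = 64M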
Installation

There are two methods of dotProject installation:

- Online control panel installation
- Browser-based installation

The most recent versions of dotProject, 2.0 and later, are not meant to be installed manually. The online control panel method is very simple and usually takes between five and ten minutes. The browser-based installation generally takes a little longer, roughly ten minutes to an hour.

Which should you choose? If you already have an ISP who hosts your domain, they probably already provide you with an installation script for dotProject using one of the popular online control panels, such as cPanel or Plesk. If they do not have the script available, they may be willing to install it for you if you make the request. dotProject can also be installed using a browser-based installation wizard. I recommend the online control panel installation for people who want a quick installation or are not technically inclined. The browser installation method is best for IT administrators or those who are comfortable installing web applications. If your only choice is a browser installation, don't worry; we will walk through one later in this article.

Backup first

It is always smart to back up any crucial files or databases that might be affected by a new installation. Always have a backup plan when a new installation is about to be performed.

Installing with an online control panel

Most control panel installations can be completed in a few steps. Be sure to write down or otherwise make a note of any file paths, folder paths, or other crucial information as you go. We will walk through a control panel installation using cPanel/Fantastico. If you have never used cPanel before, this is a great opportunity to get your feet wet. Your ISP should have provided you with a link to your cPanel when you first set up your service. You will need a username and password provided by your ISP to log in to cPanel. Once you are logged in, you will see a screen with icons for different online tools:

1. Log in to your cPanel control panel.
2. Select Fantastico (double mouse-click). The Fantastico icon is usually located at the bottom right corner of the screen.
3. Scroll down the Fantastico screen until the Project Management category appears.
4. Left mouse-click on dotProject. There will be a short description of dotProject. Make a note of the version of dotProject available; the latest stable installation should be listed. The version of dotProject is in parentheses by the new installation link. We will be using version 2.0.4 in the examples.
5. Click on the New Installation link to begin the installation process.
6. Type in the name of the subfolder where your dotProject installation should be installed. If you leave it blank, dotProject will be installed in the root folder of the URL path. For example, if I had left the folder field blank, the install tool would have placed the dotProject files directly in the public_html folder of www.leesjordan.net. I do not recommend leaving the folder field blank unless you already have a special URL set aside or are using a subdomain.

Installing OpenStack Swift

Packt
04 Jun 2015
10 min read
In this article by Amar Kapadia, Sreedhar Varma, and Kris Rajana, the authors of the book OpenStack Object Storage (Swift) Essentials, we will see how IT administrators can install OpenStack Swift. The version discussed here is the Juno release of OpenStack.

Installation of Swift has several steps and requires careful planning before beginning the process. A simple installation consists of installing all Swift components on a single node, while a complex installation consists of installing Swift on several proxy server nodes and storage server nodes. The number of storage nodes can be in the order of thousands, across multiple zones and regions. Depending on your installation, you need to decide on the number of proxy server nodes and storage server nodes that you will configure. This article demonstrates a manual installation process; advanced users may want to use utilities such as Puppet or Chef to simplify the process. This article walks you through an OpenStack Swift cluster installation that contains one proxy server and five storage servers.

(For more resources related to this topic, see here.)

Hardware planning

This section describes the various hardware components involved in the setup. Since Swift deals with object storage, disks are going to be a major part of hardware planning. The size and number of disks required should be calculated based on your requirements. Networking is also an important component, where factors such as a public or private network and a separate network for communication between storage servers need to be planned. Network throughput of at least 1 Gbps is suggested, while 10 Gbps is recommended.

The servers we set up as proxy and storage servers are dual quad-core servers with 12 GB of RAM. In our setup, we have a total of 15 x 2 TB disks for Swift storage, which gives us a raw capacity of 30 TB. However, with built-in replication (and a default replica count of 3), Swift maintains three copies of the same data, so the effective capacity for storing files and objects is approximately 10 TB, taking filesystem overhead into consideration. This is further reduced by keeping utilization below 100 percent. The following figure depicts the nodes of our Swift cluster configuration:

The storage servers have container, object, and account services running on them.

Server setup and network configuration

All the servers are installed with the Ubuntu server operating system (64-bit LTS version 14.04). You'll need to configure three networks, which are as follows:

- Public network: The proxy server connects to this network. This network provides public access to the API endpoints within the proxy server.
- Storage network: This is a private network that is not accessible to the outside world. All the storage servers and the proxy server connect to this network. Communication between the proxy server and the storage servers, and communication between the storage servers, takes place within this network. In our configuration, the IP addresses assigned in this network are in the range 172.168.10.0 to 172.168.10.99.
- Replication network: This is also a private network that is not accessible to the outside world. It is dedicated to replication traffic, and only the storage servers connect to it. All replication-related communication between the storage servers takes place within this network. In our configuration, the IP addresses assigned in this network are in the range 172.168.9.0 to 172.168.9.99. This network is optional, and if it is set up, the traffic on it needs to be monitored closely. A sketch of a matching interface configuration follows this list.
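As an illustration of how these networks might be wired on Ubuntu 14.04, here is a sketch of /etc/network/interfaces for one storage server; the interface names and the specific host addresses are assumptions, using the example ranges above:

# eth0 (public) is only needed on the proxy node
auto eth1
iface eth1 inet static
    address 172.168.10.52      # storage network
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 172.168.9.52       # replication network
    netmask 255.255.255.0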
Pre-installation steps

In order for the various servers to communicate easily, edit the /etc/hosts file and add the host names of each server to it. This has to be done on all the nodes. The following screenshot shows an example of the contents of the /etc/hosts file on the proxy server node:

Install the Network Time Protocol (NTP) service on the proxy server node and the storage server nodes. This helps all the nodes synchronize their services effectively without any clock delays. The pre-installation steps to be performed are as follows:

1. Run the following command to install the NTP service:

# apt-get install ntp

2. Configure the proxy server node to be the reference server from which the storage server nodes set their time. Make sure that the following line is present in /etc/ntp.conf for the NTP configuration on the proxy server node:

server ntp.ubuntu.com

3. For the NTP configuration on the storage server nodes, add the following line to /etc/ntp.conf, and comment out the remaining lines with server addresses such as 0.ubuntu.pool.ntp.org, 1.ubuntu.pool.ntp.org, 2.ubuntu.pool.ntp.org, and 3.ubuntu.pool.ntp.org:

# server 0.ubuntu.pool.ntp.org
# server 1.ubuntu.pool.ntp.org
# server 2.ubuntu.pool.ntp.org
# server 3.ubuntu.pool.ntp.org
server s-swift-proxy

4. Restart the NTP service on each server with the following command:

# service ntp restart

Downloading and installing Swift

The Ubuntu Cloud Archive is a special repository that allows users to install new releases of OpenStack. The steps required to download and install Swift are as follows:

1. Enable the capability to install new releases of OpenStack, and install the latest version of Swift on each node (on Ubuntu, this is typically done with the add-apt-repository cloud-archive:juno command from the software-properties-common package). This creates a file named cloudarchive-juno.list in /etc/apt/sources.list.d, whose content is "deb http://ubuntu-cloud.archive.canonical.com/ubuntu".

2. Now, update the OS using the following command:

# apt-get update && apt-get dist-upgrade

3. On all the Swift nodes, install the prerequisite software and services using this command:

# apt-get install swift rsync memcached python-netifaces python-xattr python-memcache

4. Next, create a Swift folder under /etc and give the swift user permission to access this folder, using the following commands:

# mkdir -p /etc/swift/
# chown -R swift:swift /etc/swift

5. Download the /etc/swift/swift.conf file from GitHub using this command:

# curl -o /etc/swift/swift.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/swift.conf-sample

6. Modify the /etc/swift/swift.conf file and add a variable called swift_hash_path_suffix in the swift-hash section. Create a unique hash string using # python -c "from uuid import uuid4; print uuid4()" or # openssl rand -hex 10, and assign it to this variable.

7. Then add another variable called swift_hash_path_prefix to the swift-hash section, and assign to it another hash string created using the method described in the preceding step. These strings will be used in the hashing process to determine the mappings in the ring.

The swift.conf file should be identical on all the nodes in the cluster.
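The resulting swift-hash section would look something like the following; the two values here are placeholders, and you should generate your own:

[swift-hash]
# used to salt object path hashes; never change these after data is stored
swift_hash_path_suffix = 0d6cd0760c794a54afdd
swift_hash_path_prefix = 1f5e8907cd4717a34f8c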
Setting up storage server nodes

This section explains the additional steps to set up the storage server nodes, which will contain the object, container, and account services.

Installing services

The first step required to set up a storage server node is installing the services. Let's look at the steps involved:

1. On each storage server node, install the packages for the swift-account, swift-container, and swift-object services, and xfsprogs (for the XFS filesystem), using this command:

# apt-get install swift-account swift-container swift-object xfsprogs

2. Download the account-server.conf, container-server.conf, and object-server.conf samples from GitHub, using the following commands:

# curl -o /etc/swift/account-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample
# curl -o /etc/swift/container-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample
# curl -o /etc/swift/object-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample

3. Edit the /etc/swift/account-server.conf file with the following section:

4. Edit the /etc/swift/container-server.conf file with this section:

5. Edit the /etc/swift/object-server.conf file with the following section:

Formatting and mounting hard disks

On each storage server node, we need to identify the hard disks that will be used to store the data. We will then format the hard disks and mount them on a directory, which Swift will use to store data. We will not create any RAID levels or subpartitions on these hard disks, because they are not necessary for Swift; they will be used as entire disks. The operating system will be installed on separate disks, which will be RAID-configured.

First, identify the hard disks that are going to be used for storage and format them. In our storage server, we have identified sdb, sdc, and sdd to be used for storage. We will perform the following operations on sdb; these four steps should be repeated for sdc and sdd as well:

1. Partition sdb and create the filesystem using these commands:

# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1

2. Create a directory at /srv/node/sdb1, which will be used to mount the filesystem, and give the swift user permission to access this directory. These operations can be performed using the following commands:

# mkdir -p /srv/node/sdb1
# chown -R swift:swift /srv/node/sdb1

3. Set up an entry in fstab for the sdb1 partition on the sdb hard disk as follows; this will automatically mount sdb1 on /srv/node/sdb1 upon every boot. Add the following line to the /etc/fstab file:

/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

4. Mount sdb1 on /srv/node/sdb1 using the following command:

# mount /srv/node/sdb1

RSYNC and RSYNCD

In order for Swift to perform the replication of data, we need to configure rsync by setting up rsyncd.conf. This is done by performing the following steps:

1. Create the rsyncd.conf file in the /etc folder:

# vi /etc/rsyncd.conf

2. Set up synchronization within the replication network by including the appropriate lines in the configuration file (a sketch of typical contents follows). Here, 172.168.9.52 is the IP address on the replication network for this storage server; use the appropriate replication network IP addresses for the corresponding storage servers.
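The exact contents appear in the book as a screenshot; a typical rsyncd.conf for this layout, modeled on the standard Swift deployment guide, would be along these lines:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 172.168.9.52

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock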
3. We then have to edit the /etc/default/rsync file and set RSYNC_ENABLE to true, using the following configuration option:

RSYNC_ENABLE=true

4. Next, we restart the rsync service using this command:

# service rsync restart

5. Then we create the swift recon and cache directories using the following commands, and set their permissions:

# mkdir -p /var/cache/swift
# mkdir -p /var/swift/recon

Setting the permissions is done using these commands:

# chown -R swift:swift /var/cache/swift
# chown -R swift:swift /var/swift/recon

Repeat these steps on every storage server.

Setting up the proxy server node

This section explains the steps required to set up the proxy server node, which are as follows:

1. Install the following services only on the proxy server node:

# apt-get install python-swiftclient python-keystoneclient python-keystonemiddleware swift-proxy

2. Swift itself doesn't implement HTTPS. OpenSSL has already been installed as part of the operating system installation to support HTTPS.

3. We are going to use the OpenStack Keystone service for authentication. In order to set up the proxy-server.conf file for this, we download the sample configuration file from the following link and edit it:

https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample

# vi /etc/swift/proxy-server.conf

4. The proxy-server.conf file should be edited to set the correct auth_host, admin_token, admin_tenant_name, admin_user, and admin_password values:

admin_token = 01d8b673-9ebb-41d2-968a-d2a85daa1324
admin_tenant_name = admin
admin_user = admin
admin_password = changeme

5. Next, we create a keystone-signing directory and give permissions to the swift user using the following commands:

# mkdir -p /home/swift/keystone-signing
# chown -R swift:swift /home/swift/keystone-signing

Summary

In this article, you learned how to install and set up the OpenStack Swift service to provide object storage, and how to install and set up the Keystone service to provide authentication for users accessing the Swift object storage.

Resources for Article:

Further resources on this subject:
- Troubleshooting in OpenStack Cloud Computing [Article]
- Using OpenStack Swift [Article]
- Playing with Swift [Article]

Automated testing using Robotium

Packt
19 Nov 2013
10 min read
(For more resources related to this topic, see here.)

The Robotium framework

Robotium is an open source automation testing framework used to write robust and powerful black box tests for Android applications (the emphasis is mostly on black box test cases). It fully supports testing for native and hybrid applications. Native apps live on the device, that is, they are designed for a specific platform and can be installed from the Google Play Store, whereas hybrid apps are partly native and partly web apps; these can also be installed from the app store, but they require the HTML to be rendered in the browser.

Robotium is mostly used to automate UI test cases, and it internally uses run-time binding to Graphical User Interface (GUI) components. Robotium is released under the Apache License 2.0. It is free to download and can easily be used by individuals and enterprises, and it is built on Java and JUnit 3. It would be more appropriate to call Robotium an extension of the Android Test Unit Framework, available at http://developer.android.com/tools/testing/testing_android.html. Robotium can also work without the source code of the application under test.

Test cases written using Robotium can be executed either on the Android Emulator (Android Virtual Device (AVD)), and we will see how to create an AVD during installation in the following section, or on a real Android device. Developers can write function, system, and acceptance test scenarios across multiple activities. It is currently the world's leading automation testing framework, and many open source developers are contributing to introduce more and more exciting features in subsequent releases. The following screenshot is of the git repository website for the Robotium project:

As Robotium is an open source project, anyone can contribute to its development and help enhance the framework with many more features. The Robotium source code is maintained at GitHub and can be accessed using the following link:

https://github.com/jayway/robotium

You just need to fork the project, make all your changes in a clone of the project, and click on Pull Request on your repository to tell the core team members which changes to bring in. If you are new to the git environment, you can refer to the GitHub tutorial at the following link:

https://help.github.com/

Robotium is like Selenium, but for Android. This project was started in January 2010 by Renas Reda. He is the founder and main developer of Robotium. The project started with v1.0 and continues to see new releases driven by new requirements. It has support for Android features such as activities, toasts, menus, context menus, web views, and remote controls. Let's look at most of the Robotium features and benefits for Android test case developers.

Features and benefits

Automated testing using Robotium has many features and benefits. The triangularization workflow diagram between the user, Robotium, and the Android device clearly explains the use cases between them:

The features and benefits of Robotium are as follows:

- Robotium helps us to quickly write powerful test cases with minimal knowledge of the application under test.
- Robotium offers APIs to directly interact with UI controls within the Android application, such as EditText, TextView, and Button.
- Robotium officially supports Android 1.6 and above versions.
- The Android platform is not modified by Robotium.
- Robotium tests can also be executed from the command prompt.
- Robotium can be integrated smoothly with Maven or Ant. This helps to add Robotium to your project's build automation process.
- Screenshots can be captured in Robotium (an example screenshot is shown as follows):
- The test application project and the application project run on the same JVM, that is, the Dalvik Virtual Machine (DVM).
- It's possible to run Robotium without the source code.
- Robotium can work with other code coverage measurement tools, such as Cobertura and Emma.
- Robotium can detect the messages that are shown on the screen (toasts).
- Robotium supports Android features such as activities, menus, and context menus.
- Robotium automated tests can be implemented quickly.
- Robotium is built on JUnit, because of which it inherits all of JUnit's features.
- The Robotium framework automatically handles multiple activities in an Android application.
- Robotium test cases are prominently readable in comparison to standard instrumentation tests.
- Scrolling activity is automatically handled by the Robotium framework.
- Recent versions of Robotium support hybrid applications. Hybrid applications use WebViews to present the HTML and JavaScript files in full screen, using the native browser rendering engine.
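To give a feel for the API before going further, here is a minimal sketch of a Robotium test case; the activity under test (LoginActivity), its UI strings, and the import path of the Solo class (which has moved between Robotium versions) are assumptions:

import android.test.ActivityInstrumentationTestCase2;
import com.jayway.android.robotium.solo.Solo;

// Hypothetical test for a LoginActivity in the application under test
public class LoginActivityTest extends ActivityInstrumentationTestCase2<LoginActivity> {

    private Solo solo;

    public LoginActivityTest() {
        super(LoginActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        // Solo binds the instrumentation to the launched activity
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testSuccessfulLogin() {
        solo.enterText(0, "testuser");   // first EditText on the screen
        solo.enterText(1, "secret");     // second EditText
        solo.clickOnButton("Log in");
        // Passes only if the welcome message appears within the default timeout
        assertTrue(solo.waitForText("Welcome"));
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
    }
}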
- Robotium can be integrated smoothly with Maven or Ant, which helps to add Robotium to your project's build automation process.
- Screenshots can be captured in Robotium (an example screenshot is shown as follows):
- The test application project and the application project run on the same JVM, that is, the Dalvik Virtual Machine (DVM).
- It's possible to run Robotium without the source code.
- Robotium can work with other code coverage measurement tools, such as Cobertura and Emma.
- Robotium can detect the messages that are shown on the screen (toasts).
- Robotium supports Android features such as activities, menus, and context menus.
- Robotium automated tests can be implemented quickly.
- Robotium is built on JUnit, because of which it inherits all of JUnit's features.
- The Robotium framework automatically handles multiple activities in an Android application.
- Robotium test cases are prominently readable in comparison to standard instrumentation tests.
- Scrolling activity is automatically handled by the Robotium framework.
- Recent versions of Robotium support hybrid applications. Hybrid applications use WebViews to present the HTML and JavaScript files in full screen, using the native browser rendering engine.
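To make this concrete before we look at the API set, here is a minimal sketch of what a Robotium test case typically looks like. The MainActivity class, the button labels, and the note text are hypothetical placeholders for your own application under test:

import android.test.ActivityInstrumentationTestCase2;
import com.jayway.android.robotium.solo.Solo;

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private Solo solo;

    public MainActivityTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Solo binds Robotium to the instrumentation and the launched activity
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testAddNote() {
        solo.clickOnButton("Add note");           // click a button by its label
        solo.enterText(0, "Buy milk");            // type into the first EditText
        solo.clickOnButton("Save");
        assertTrue(solo.searchText("Buy milk"));  // verify the note is displayed
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();            // close all activities opened by the test
        super.tearDown();
    }
}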
API set

Web support has been added to the Robotium framework since the Robotium 4.0 release. Robotium has full support for hybrid applications. There are some key differences between native and hybrid applications. Let's go through them one by one, as follows:

Native Application | Hybrid Application
Platform dependent | Cross-platform
Runs on the device's internal software and hardware | Built using HTML5 and JavaScript, wrapped inside a thin native container that provides access to native platform features
Needs more developers to build apps on different platforms, and the learning time is longer | Saves development cost and time
Excellent performance | Lower performance

The native and hybrid applications are shown as follows:

Let's see some of the existing methods in Robotium that support access to web content. They are as follows:

searchText (String text)
scrollUp/Down ()
clickOnText (String text)
takeScreenshot ()
waitForText (String text)

In the methods specifically added for web support, the class By is used as a parameter. It is an abstract class used in conjunction with the web methods. These methods are used to select different WebElements by their properties, such as ID and name. The element used in a web view is referred to as a WebElement. It is similar to the WebDriver implemented in Selenium. The following table lists all the methods inside the class By:

Method | Description
className (String className) | Selects a WebElement by its class name
cssSelector (String selectors) | Selects a WebElement by its CSS selector
getValue () | Returns the value
id (String id) | Selects a WebElement by its ID
name (String name) | Selects a WebElement by its name
tagName (String tagName) | Selects a WebElement by its tag name
textContent (String textContent) | Selects a WebElement by its text content
xpath (String xpath) | Selects a WebElement by its XPath

Some of the important methods in the Robotium framework that aim at direct communication with web content in Android applications are listed as follows:

clickOnWebElement(By by): It clicks on the WebElement matching the specified By class object.
waitForWebElement(By by): It waits for the WebElement matching the specified By class object.
getWebElement(By by, int index): It returns a WebElement matching the specified By class object and index.
enterTextInWebElement(By by, String text): It enters the text in a WebElement matching the specified By class object.
typeTextInWebElement(By by): It types the text in a WebElement matching the specified By class object. In this method, the program actually types the text letter by letter using the keyboard, whereas enterTextInWebElement directly enters the text in the particular WebElement.
clearTextInWebElement(By by): It clears the text in a WebElement matching the specified By class object.
getCurrentWebElements(By by): It returns the ArrayList of WebElements displayed in the active web view matching the specified By class object.

Before actually looking into the hybrid test example, let's gain more information about WebViews. You can get an instance of WebView using the Solo class as follows:

WebView wb = solo.getCurrentViews(WebView.class).get(0);

Now that you have control of WebView, you can inject your JavaScript code as follows:

wb.loadUrl("<JavaScript>");

This is very powerful, as we can call every function on the current page; thus, it helps automation.
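As a minimal sketch of these web-support methods in a hybrid-app test, consider the following; the element IDs and page texts are hypothetical and depend entirely on the HTML loaded in your app's WebView:

public void testLoginForm() {
    // Wait until the WebView has rendered the form
    assertTrue(solo.waitForWebElement(By.id("username")));

    solo.enterTextInWebElement(By.id("username"), "testuser");
    solo.enterTextInWebElement(By.id("password"), "secret");
    solo.clickOnWebElement(By.id("login"));

    // Verify that the page reports a successful login
    assertTrue(solo.waitForText("Welcome, testuser"));
}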
Robotium Remote Control using SAFS

SAFS (Software Automation Framework Support) tests are not wrapped up as JUnit tests, and the SAFS Remote Control of Robotium uses an implementation that is not JUnit based. There is also no technical requirement for JUnit on the remote-control side of the test. The test setup and deployment of the automation of the target app can be achieved using the SDK tools. These tools are used as part of the test runtime, such as adb and aapt. The existing packaging tools can be used to repackage a compiled Robotium test with an alternate AndroidManifest.xml file, which can change the target application at runtime.

SAFS is a general-purpose, data-driven framework. The only thing that should be provided by the user is the target package name or APK path arguments. The test will extract and redeploy the modified packages automatically and then launch the actual test. Traditional JUnit/Robotium users might not have, or see the need for, this general-purpose nature, but that is likely because the previous Android tests had to be JUnit tests, and such a test is required to target one specific application. The Remote Control application is application specific; that's why the test app, with the Remote Control installed on the device, no longer needs to be.

The Remote Control in Robotium means there are two test applications to build for any given test. They are as follows:
- A traditional on-device Robotium/JUnit test app
- A Remote Control app

These two build projects have entirely different dependencies and build scripts. The on-device test app has the traditional Robotium/Android/JUnit dependencies and build scripts, while the Remote Control app only has dependencies on the TCP sockets for communications and the Robotium Remote Control API.

The implementation of remote-controlled Robotium can be done in the following two pieces:
- On Device: ActivityInstrumentationTestCase2.setup() is initialized when Robotium's Solo class object is to be used by the RobotiumTestRunner (RTR). The RTR has a Remote Control Listener and routes remote control calls and data to the appropriate Solo class methods, returning any results, as needed, to the Remote Controller. The on-device implementation may exploit test-result asserts if that is desirable.
- Remote Controller: The RemoteSolo API duplicates the traditional Solo API, but its implementation largely pushes the data through the Remote Control to the RTR, and then receives results from the Remote Controller. The Remote Control implementation may exploit any number of options for asserting, handling, or otherwise reporting or tracking the test results for each call.

As you can see, the remote-control side only requires a RemoteSolo API without any specific JUnit context. It can be wrapped in a JUnit context if the tester desires it, but it is not necessary for it to be in a JUnit context.

The sample code and installation instructions for Robotium Remote Control can be accessed at the following link:

http://code.google.com/p/robotium/wiki/RemoteControl

Summary

Thus this article introduced us to the Robotium framework, its different features, its benefits in the world of automated testing, the API set of the Robotium framework, and how to implement Robotium Remote Control using SAFS.

Resources for Article:

Further resources on this subject:
- So, what is Spring for Android? [Article]
- Android Native Application API [Article]
- Introducing an Android platform [Article]
Implementing lighting & camera effects in Unity 2018

Amarabha Banerjee
04 May 2018
12 min read
Today, we will explore lighting and camera effects in Unity 2018. We will start with cameras, covering perspectives, frustums, and Skyboxes. Next, we will learn a few uses of multiple cameras, including mini-maps. We will also cover the different types of lighting, explore reflection probes, and conclude with a look at shadows.

Working with cameras

Cameras render scenes so that the user can view them. Think about the hidden complexity of this statement: our games are 3D, but people playing our games view them on 2D displays such as televisions, computer monitors, or mobile devices. Fortunately for us, Unity makes implementing cameras easy work.

Cameras are GameObjects and can be edited using the transform tools in the Scene view as well as in the Inspector panel. Every scene must have at least one camera. In fact, when a new scene is created, Unity creates a camera named Main Camera. As you will see later in this chapter, a scene can have multiple cameras. In the Scene view, cameras are indicated with a white camera silhouette, as shown in the following screenshot:

When we click our Main Camera in the Hierarchy panel, we are provided with a Camera Preview in the Scene view. This gives us a preview of what the camera sees, as if it were in game mode. We also have access to several parameters via the Inspector panel. The Camera component in the Inspector panel is shown here:

Let's look at each of these parameters in relation to our Cucumber Beetle game:
- The Clear Flags parameter lets you switch between Skybox, Solid Color, Depth Only, and Don't Clear. The selection here informs Unity which parts of the screen to clear. We will leave this setting as Skybox. You will learn more about Skyboxes later in this chapter.
- The Background parameter is used to set the default background fill (color) of your game world. This will only be visible after all game objects have been rendered and if there is no Skybox. Our Cucumber Beetle game will have a Skybox, so this parameter can be left with the default color.
- The Culling Mask parameter allows you to select and deselect the layers you want the camera to render. The default selection options are Nothing, Everything, Default, TransparentFX, Ignore Raycast, Water, and UI. For our game, we will select Everything. If you are not sure which layer a game object is associated with, select it and look at the Layer parameter in the top section of the Inspector panel. There you will see the assigned layer. You can easily change the layer as well as create your own unique layers. This gives you fine-grained rendering control.
- The Projection parameter allows you to select which projection, perspective or orthographic, you want for your camera. We will cover both of those projections later in this chapter. When perspective projection is selected, we are given access to the Field of View parameter. This is the width of the camera's angle of view. The value range is 1-179°. You can use the slider to change the values and see the results in the Camera Preview window. When orthographic projection is selected, an additional Size parameter is available. This refers to the viewport size. For our game, we will select perspective projection with the Field of View set to 60.
- The Clipping Planes parameters include Near and Far. These settings set the closest and furthest points, relative to the camera, at which rendering will happen. For now, we will leave the default settings of 0.3 and 1000 for the Near and Far parameters, respectively.
- The Viewport Rect parameter has four components – X, Y, W, and H – that determine where the camera will be drawn on the screen. As you would expect, the X and Y components refer to horizontal and vertical positions, and the W and H components refer to width and height. You can experiment with these values and see the changes in the Camera Preview. For our game, we will leave the default settings.
- The Depth parameter is used when we implement more than one camera. We can set a value here to determine the camera's priority in relation to others. Larger values indicate a higher priority. The default setting is sufficient for our game.
- The Rendering Path parameter defines what rendering methods our camera will use. The options are Use Graphics Settings, Forward, Deferred, Legacy Vertex Lit, and Legacy Deferred (light prepass). We will use the Use Graphics Settings option for our game, which is also the default setting.
- The Target Texture parameter is not something we will use in our game. When a render texture is set, the camera is not able to render to the screen.
- The Occlusion Culling parameter is a powerful setting. If enabled, Unity will not render objects that are occluded, or not seen by the camera. An example would be objects inside a building. If the camera can currently only see the external walls of the building, then none of the objects inside those walls can be seen, so it makes sense not to render those. We only want to render what is absolutely necessary to help ensure our game has smooth gameplay and no lag. We will leave this enabled for our game.
- The Allow HDR parameter is a checkbox that toggles a camera's High Dynamic Range (HDR) rendering. We will leave the default setting of enabled for our game.
- The Allow MSAA parameter is a toggle that determines whether our camera will use a Multisample Anti-Aliasing (MSAA) render target. MSAA is a computer graphics optimization technique and we want this enabled for our game.

Understanding camera projections

There are two camera projections used in Unity: perspective and orthographic. With perspective projection, the camera renders a scene based on the camera angle, as it exists in the scene. Using this projection, the further away an object is from the camera, the smaller it will be displayed. This mimics how we see things in the real world. Because of the desire to produce realistic games, or games that approximate the real world, perspective projection is the most commonly used in modern games. It is also what we will use in our Cucumber Beetle game.

The other projection is orthographic. An orthographic camera renders a scene uniformly, without any perspective. This means that objects further away will not be displayed smaller than objects closer to the camera. This type of camera is commonly used for top-down games and is the default camera projection used in 2D games and Unity's UI system. We will use perspective projection for our Cucumber Beetle game.

Orientating your frustum

When a camera is selected in the Hierarchy view, its frustum is visible in the Scene view. A frustum is a geometric shape that looks like a pyramid that has had its top cut off, as illustrated here:

The near, or top, plane is parallel to its base. The base is also referred to as the far plane. The frustum's shape represents the viable region of your game; only objects in that region are rendered. Using the camera object in the Scene view, we can change our camera's frustum shape.
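Most of these camera parameters can also be driven from script at runtime. The following is a small illustrative sketch using Unity's Camera API; the values mirror the Inspector settings discussed above and are examples only, not settings required by the Cucumber Beetle game:

using UnityEngine;

// Illustrative sketch: configures the attached camera's projection settings in code.
public class CameraSettingsExample : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        cam.clearFlags = CameraClearFlags.Skybox; // the Clear Flags parameter
        cam.fieldOfView = 60f;                    // perspective Field of View, in degrees
        cam.nearClipPlane = 0.3f;                 // Clipping Planes: Near
        cam.farClipPlane = 1000f;                 // Clipping Planes: Far
        cam.depth = 0f;                           // priority among multiple cameras

        // Switching to orthographic projection instead would look like this:
        // cam.orthographic = true;
        // cam.orthographicSize = 5f;             // the orthographic Size parameter
    }
}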
Creating a Skybox

When we create game worlds, we typically create the ground, buildings, characters, trees, and other game objects. What about the sky? By default, there will be a textured blue sky in your Unity game projects. That sky is sufficient for testing but does not add to an immersive gaming experience. We want a bit more realism, such as clouds, and that can be accomplished by creating a Skybox.

A Skybox is a six-sided cube visible to the player beyond all other objects. So, when a player looks beyond your objects, what they see is your Skybox. As we said, Skyboxes are six-sided cubes, which means you will need six separate images that can essentially be clamped to each other to form the cube. The following screenshot shows the Default Skybox that Unity projects start with, as well as the completed Custom Skybox you will create in this section:

Perform the following steps to create a Skybox:
1. In the Project panel, create a Skybox subfolder in the Assets folder. We will use this folder to store our textures and materials for the Skybox.
2. Drag the provided six Skybox images, or your own, into the new Skybox folder.
3. Ensure the Skybox folder is selected in the Project panel.
4. From the top menu, select Assets | Create | Material.
5. In the Project panel, name the material Skybox.
6. With the Skybox material selected, turn your attention to the Inspector panel. Select the Shader drop-down menu and select Skybox | 6 Sided.
7. Use the Select button for each of the six images and navigate to the images you added in step 2. Be sure to match the appropriate texture to the appropriate cube face. For example, the SkyBox_Front texture matches the Front[+Z] cube face on the Skybox material.
8. In order to assign our new Skybox to our scene, select Window | Lighting | Settings from the top menu. This will bring up the Lighting settings dialog window.
9. In the Lighting settings dialog window, click on the small circle to the right of the Skybox Material input field. Then, close the selection window and the Lighting window. Refer to the following screenshot:

You will now be able to see your Skybox in the Scene view. When you click on the Camera in the Hierarchy panel, you will also see the Skybox as it will appear from the camera's perspective. Be sure to save your scene and your project.

Using multiple cameras

Our Unity games must have at least one camera, but we are not limited to using just one. As you will see, we will attach our main camera, or primary camera, to our player character. It will be as if the camera is following the character around the game environment. This will become the eyes of our character, and we will play the game through our character's view.

A common use of a second camera is to create a mini-map that can be seen in a small window on top of the game display. These mini-maps can be made to toggle on and off or be permanent/fixed display components. Implementations might consist of a fog-of-war display, a radar showing enemies, or a global top-down view of the map for orientation purposes. You are only limited by your imagination.

Another use of multiple cameras is to provide the player with the ability to switch between third-person and first-person views. You will remember that the first-person view puts the player's arms in view, while in the third-person view, the player's entire body is visible. We can use two cameras in the appropriate positions to support viewing from either camera. In a game, you might make this a toggle—say, with the C keyboard key—that switches from one camera to the other. Depending on what is happening in the game, the player might enjoy this ability. A minimal toggle script is sketched below.
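Here is one way such a toggle could look; the two camera references are hypothetical and would be assigned in the Inspector:

using UnityEngine;

// Illustrative sketch: switches between two cameras when the C key is pressed.
public class CameraToggle : MonoBehaviour
{
    public Camera firstPersonCamera;
    public Camera thirdPersonCamera;

    void Start()
    {
        // Begin in the third-person view
        firstPersonCamera.enabled = false;
        thirdPersonCamera.enabled = true;
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.C))
        {
            // Swap which camera renders the scene
            firstPersonCamera.enabled = !firstPersonCamera.enabled;
            thirdPersonCamera.enabled = !thirdPersonCamera.enabled;
        }
    }
}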
Some single-player games feature multiple playable characters. Giving the player the ability to switch between these characters gives them greater control over the game strategy. To achieve this, we would need to have cameras attached to each playable character and then give the player the ability to swap characters. We would do this through scripting; it is a pretty advanced implementation of multiple cameras.

Another use of multiple cameras is adding specialty views in a game. These specialty views might include looking through a door's peep-hole, looking through binoculars at the top of a skyscraper, or even looking through a periscope. We can attach cameras to objects and change their viewing parameters to create unique camera uses in our games. We are only limited by our own game designs and imagination.

We can also use cameras as cameras. That's right! We can use the camera game object to simulate actual in-game cameras. One example is implementing security cameras in a prison-escape game.

Working with lighting

In the previous sections, we explored the uses of cameras for Unity games. Just like in the real world, cameras need lights to show us objects. In Unity games, we use multiple lights to illuminate the game environment.

In Unity, we have both dynamic lighting techniques and light baking options for better performance. We can add numerous light sources throughout our scenes and selectively enable or disable an object's ability to cast or receive shadows. This level of specificity gives us tremendous opportunity to create realistic game scenes.

Perhaps the secret behind Unity's ability to render light and shadows so realistically is that Unity models the actual behavior of lights and shadows. Real-time global illumination gives us the ability to instantiate multiple lights in each scene, each with the ability to directly or indirectly impact objects in the scene that are within range of the light sources.

We can also add and manipulate ambient light in our game scenes. This is often done with Skyboxes, a tri-colored gradient, or even a single color. Each new scene in Unity has default ambient lighting, which we can control by editing the values in the Lighting window. In that window, you have access to the following settings:
- Environment
- Real-time Lighting
- Mixed Lighting
- Lightmapping Settings
- Other Settings
- Debug Settings

No changes to these are required for our game at this time. We have already set the environmental lighting to our Skybox.

When we create our scenes in Unity, we have three options for lighting: we can use real-time dynamic light, use the baked lighting approach, or use a mixture of the two. Our games perform more efficiently with baked lighting than with real-time dynamic lighting, so if performance is a concern, try using baked lighting where you can.

To summarize, we have discussed how to create interesting lighting and camera effects using Unity 2018. This article is an extract from the book Getting Started with Unity 2018 written by Dr. Edward Lavieri. This book will help you create fun-filled real-world games with Unity 2018.

- Game Engine Wars: Unity vs Unreal Engine
- Unity plugins for augmented reality application development
- How to create non-player Characters (NPC) with Unity 2018
Initial Configuration of SCO 2016

Packt
17 Jul 2017
13 min read
In this article by Michael Seidl, author of the book Microsoft System Center 2016 Orchestrator Cookbook - Second Edition, we will look at how to set up an Orchestrator environment and how to deploy and configure Orchestrator Integration Packs.

(For more resources related to this topic, see here.)

Deploying an additional Runbook Designer

The Runbook Designer is the key tool for building your Runbooks. After the initial installation, the Runbook Designer is installed on the server. For your daily work with Orchestrator and Runbooks, you will want to install the Runbook Designer on your client or on an admin server. We will go through these steps in this recipe.

Getting ready

You must review the planning the Orchestrator deployment recipe before performing the steps in this recipe. There are a number of dependencies in the planning recipe you must perform in order to successfully complete the tasks in this recipe.

You must install a management server before you can install the additional Runbook Designers. The user account performing the installation must have administrative privileges on the server nominated for the SCO deployment and must also be a member of OrchestratorUsersGroup or have equivalent rights.

The example deployment in this recipe is based on the following configuration details:
- A management server called TLSCO01 (System Center 2016 Orchestrator) with a remote database, already installed

How to do it...

The Runbook Designer is used to build Runbooks using standard activities and/or integration pack activities. The designer can be installed on either a server class operating system or a client class operating system.

Follow these steps to deploy an additional Runbook Designer using the Deployment Manager:
1. Install a supported operating system and join the Active Directory domain in scope of the SCO deployment. In this recipe, the operating system is Windows 10.
2. Ensure you configure the allowed ports and services if the local firewall is enabled for the domain profile (a sketch of such a rule follows this recipe's How it works section). See the following link for details: https://technet.microsoft.com/en-us/library/hh420382(v=sc.12).aspx.
3. Log in to the SCO management server with a user account with SCO administrative rights.
4. Launch System Center 2016 Orchestrator Deployment Manager:
5. Right-click on Runbook Designers, and select Deploy new Runbook Designer:
6. Click on Next on the welcome page.
7. Type the computer name in the Computer field and click on Add. Click on Next.
8. On the Deploy Integration Packs or Hotfixes page, check all the integration packs required by the user of the Runbook Designer (for this example, we will select the AD IP). Click on Next.
9. Click on Finish to begin the installation using the Deployment Manager.

How it works...

The Deployment Manager is a great option for scaling out your Runbook Servers and also for distributing the Runbook Designer without the need for the installation media. In both cases, the Deployment Manager connects to the management server and the database server to configure the necessary settings. On the target system, the Deployment Manager installs the required binaries and optionally deploys the integration packs selected.

Using the Deployment Manager provides a consistent and coordinated approach to scaling out the components of a SCO deployment.
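As an illustrative sketch for step 2 above, an inbound firewall rule can be created with PowerShell on the target machine. The port shown (TCP 135, used for DCOM) is an example only; consult the linked TechNet article for the full list of ports your deployment actually requires:

# Example only: allow inbound DCOM traffic used by Orchestrator components.
# Verify the required ports for your deployment against the TechNet article.
New-NetFirewallRule -DisplayName "Orchestrator DCOM (example)" `
    -Direction Inbound -Protocol TCP -LocalPort 135 -Action Allow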
See also

The following official web link is a great source of the most up-to-date information on SCO: https://docs.microsoft.com/en-us/system-center/orchestrator/

Registering an SCO Integration Pack

Microsoft System Center 2016 Orchestrator (SCO) automation is driven by process automation components. These process automation components are similar in concept to a physical toolbox. In a toolbox, you typically have different types of tools which enable you to build what you desire. In the context of SCO, these tools are known as activities. Activities fall into two main categories:
- Built-in standard activities: These are the default activity categories available to you in the Runbook Designer. The standard activities on their own provide you with a set of components to create very powerful Runbooks.
- Integration pack activities: Integration pack activities are provided either by Microsoft, the community, or solution integration organizations, or are custom created by using the Orchestrator Integration Pack Toolkit. These activities provide you with the Runbook components to interface with the target environment of the IP. For example, the Active Directory IP has the activities you can perform in the target Active Directory environment.

This recipe provides the steps to find and register the second type of activities into your default implementation of SCO.

Getting ready

You must download the integration pack(s) you plan to deploy from the provider of the IP. In this example, we will be deploying the Active Directory IP, which can be found at the following link: https://www.microsoft.com/en-us/download/details.aspx?id=54098. You must have deployed a System Center 2016 Orchestrator environment and have full administrative rights in the environment.

How to do it...

The following diagram provides a visual summary and order of the tasks you need to perform to complete this recipe:

We will deploy the Microsoft Active Directory (AD) integration pack (IP).

Integration pack organization

A good practice is to create a folder structure for your integration packs. The folders should reflect versions of the IPs for logical grouping and management. The version of the IP will be visible in the console, and as such, you must perform this step after you have performed the step to load the IP(s). This approach will aid in change management when updating IPs in multiple environments.

Follow these steps to deploy the Active Directory integration pack:
1. Identify the source location for the integration pack in scope (for example, the AD IP for SCO 2016).
2. Download the IP to a local directory on the management server or a UNC share.
3. Log in to the SCO management server.
4. Launch the Deployment Manager:
5. Under Orchestrator Management Server, right-click on Integration Packs. Select Register IP with the Orchestrator Management Server:
6. Click on Next on the welcome page.
7. Click on Add on the Select Integration Packs or Hotfixes page.
8. Navigate to the directory where the target IP is located, click on Open, and then click on Next.
9. Click on Finish.
10. Click on Accept on the End-User License Agreement to complete the registration.
11. Click on Refresh to validate whether the IP has been registered successfully.

How it works...

The process of loading an integration pack is simple. The prerequisite for successfully registering (loading) the IP is ensuring you have downloaded a supported IP to a location accessible to the SCO management server. Additionally, the person performing the registration must be a SCO administrator.
At this point, we have registered the integration pack with our Deployment Manager. Two steps are still necessary before we can use the integration pack; see the following recipes for these.

There's more...

Registering the IP is the first part of the process of making the IP activities available to Runbook Designers and Runbook Servers. The next step is the deployment of integration packs to the Runbook Designer; see the next recipe for that.

Orchestrator integration packs are provided not only by Microsoft; third-party companies such as Cisco or NetApp also provide OIPs for their products. Additionally, there is a huge community providing Orchestrator integration packs. There are several sources for downloading integration packs; here are some useful links:
- http://www.techguy.at/liste-mit-integration-packs-fuer-system-center-orchestrator/
- http://scorch.codeplex.com/
- https://www.microsoft.com/en-us/download/details.aspx?id=54098

Deploying the IP to designers and Runbook servers

Registering the Orchestrator integration pack is only the first step; you also need to deploy the OIP to your designer or Runbook Server.

Getting ready

You have to follow the steps described in the recipe Registering an SCO Integration Pack before you can start with the next steps to deploy an OIP.

How to do it...

In our example, we will deploy the Active Directory integration pack to our Runbook Designer. Once the IP in scope (the AD IP in our example) has been registered successfully, follow these steps to deploy it to the Runbook Designers and Runbook Servers:
1. Log in to the SCO management server and launch Deployment Manager:
2. Under Orchestrator Management Server, right-click on the integration pack in scope and select Deploy IP to Runbook Server or Runbook Designer:
3. Click on Next on the welcome page, select the IP you would like to deploy (in our example, System Center Integration Pack for Active Directory), and then click on Next.
4. On the Computer Selection page, type the name of the Runbook Server or designer in scope and click on Add (repeat for all servers in scope).
5. On the Installation Options page, you have the following three options:
- Schedule the Installation: Select this option if you want to schedule the deployment for a specific time. You still have to select one of the next two options.
- Stop all running Runbooks before installing the Integration Packs or Hotfixes: This option will, as described, stop all current Runbooks in the environment.
- Install the Integration Packs or Hotfixes without stopping the running Runbooks: This is the preferred option if you want to have a controlled deployment without impacting current jobs:
6. Click on Next after making your installation option selection.
7. Click on Finish.

The integration pack will be deployed to all selected designers and Runbook Servers. You must close all Runbook Designer consoles and re-launch them to see the newly deployed integration pack.

How it works…

The process of deploying an integration pack is simple. The prerequisite for successfully deploying (loading) the IP is ensuring you have registered a supported IP on the SCO management server. Now we have successfully deployed an Orchestrator integration pack. If you have deployed it to a Runbook Designer, make sure you close and reopen the designer to be able to use the activities in this integration pack.
Now you are able to use these activities to build your Runbooks; the only thing left to do is to follow our next recipe and configure this integration pack. These steps can be used for each single integration pack; you can also deploy multiple OIPs with one deployment.

There's more…

You have to deploy an OIP to every single designer and Runbook Server where you want to work with its activities. It doesn't matter whether you want to edit a Runbook with the designer or run a Runbook on a particular Runbook Server: the OIP has to be deployed to both. With Orchestrator Deployment Manager, this is an easy task to do.

Initial Integration Pack configuration

This recipe provides the steps required to configure an integration pack for use once it has been successfully deployed to a Runbook Designer.

Getting ready

You must deploy an Orchestrator environment and also deploy the IP you plan to configure to a Runbook Designer before following the steps in this recipe. The authors assume the user account performing the installation has administrative privileges on the server nominated for the SCO Runbook Designer.

How to do it...

Each integration pack serves as an interface to the actions SCO can perform in the target environment. In our example, we will be focusing on the Active Directory connector. We will have two accounts under two categories of AD tasks in our scenario:

IP name | Category of actions | Account name
Active Directory | Domain Account Management | SCOAD_ACCMGT
Active Directory | Domain Administrator Management | SCOAD_DOMADMIN

The following diagram provides a visual summary and order of the tasks you need to perform to complete this recipe:

Follow these steps to complete the configuration of the Active Directory IP options in the Runbook Designer:
1. Create or identify an existing account for the IP tasks. In our example, we are using two accounts to represent two personas of a typical Active Directory delegation model: SCOAD_ACCMGT is an account with the rights to perform account management tasks only, and SCOAD_DOMADMIN is a domain admin account for elevated tasks in Active Directory.
2. Launch the Runbook Designer as a SCO administrator, select Options from the menu bar, and select the IP to configure (in our example, Active Directory).
3. Click on Add, type AD Account Management in the Name: field, and select Microsoft Active Directory Domain Configuration in the Type field. In the Properties section, type the following:
- Configuration User Name: SCOAD_ACCMGT
- Configuration Password: Enter the password for SCOAD_ACCMGT
- Configuration Domain Controller Name (FQDN): The FQDN of an accessible domain controller in the target AD (in this example, TLDC01.TRUSTLAB.LOCAL)
- Configuration Default Parent Container: This is an optional field. Leave it blank:
4. Click on OK.
5. Repeat steps 3 and 4 for the domain admin account and click on Finish to complete the configuration.

How it works...

The IP configuration is unique for each system environment SCO interfaces with for the tasks in scope of automation. The Active Directory IP configuration grants SCO the rights to perform the actions specified in the Runbook using the activities of the IP. Typical Active Directory activities include, but are not limited to, creating user and computer accounts, moving user and computer accounts into organizational units, or deleting user and computer accounts.

In our example, we created two connection account configurations for the following reasons:
- To follow the guidance of scoping automation to the rights of the manual processes.
If we use the example of a Runbook for creating user accounts, we do not need domain admin access. A service desk user performing the same action manually would typically be granted only account management rights in AD.
- To have more flexibility with delegating management and access to Runbooks. Runbooks with elevated rights through the connection configuration can be separated from Runbooks with lower rights using folder security.

The configuration requires planning and an understanding of its implications before implementing. Each IP has its own unique options which you must specify before you create Runbooks using the specified IP. The default IPs that you can download from Microsoft include documentation on the properties you must set.

There's more…

As you have seen in this recipe, we need to configure each additional integration pack with a connection string, user, and password. The built-in activities from SCO use the service account's rights to perform their actions, though you can configure a different user for most of the built-in activities.

See also

The official online documentation for Microsoft integration packs is updated regularly and should be a point of reference: https://www.microsoft.com/en-us/download/details.aspx?id=54098

The recipe creating and maintaining a security model for Orchestrator in this article expands further on the delegation model in SCO.

Summary

In this article, we have covered the following:
- Deploying an additional Runbook Designer
- Registering an SCO Integration Pack
- Deploying an SCO Integration Pack to Runbook Designer and Server
- Initial Integration Pack configuration

Resources for Article:

Further resources on this subject:
- Deploying the Orchestrator Appliance [article]
- Unpacking System Center 2012 Orchestrator [article]
- Planning a Compliance Program in Microsoft System Center 2012 [article]
So, you want to learn artificial intelligence. Here's how you do it.

Richard Gall
27 Feb 2019
8 min read
If you want to learn how to build artificial intelligence systems, the first step is simple: forget all about artificial intelligence. Instead, focus your attention on machine learning. That way, you can be sure you're in the domain of the practical rather than the domain of hype.

Okay, this position might sound a little too dramatic, but there are a number of jokes doing the rounds on Twitter along these lines. Mat Velloso, an adviser to Satya Nadella at Microsoft, wrote late last year that "if it's written in Python, it's machine learning. If it's written in PowerPoint, it's probably AI."

https://twitter.com/matvelloso/status/1065778379612282885

There are similar jokes about using different words depending on whether you're talking to investors or colleagues. Either way, it's clear that if you're starting to explore artificial intelligence and machine learning, understanding what's important and what you can ignore will help you get a better view of where you need to go as your learning journey unfolds.

So, once you understand that artificial intelligence is merely the word describing the end goal we're trying to achieve, and machine learning is a means of achieving that goal, you can begin to develop intelligent systems yourself. Clearly, a question will keep cropping up: where next? Well, this post should go some way to helping you.

Do you want to learn artificial intelligence? Read Packt's extensive Learning Path Python: Beginner's Guide to Artificial Intelligence. For a more advanced guide, check out Python: Advanced Guide to Artificial Intelligence.

The basics of machine learning

If you want to build artificial intelligence, you need to start by learning the basics of machine learning. Follow these steps:
- Get to grips with the basics of Python and core programming principles. If you're reading this, you probably know enough to begin, but if you don't, there are plenty of resources to get you started. (We suggest you start with Learning Python.)
- Make sure you understand basic statistical principles. Machine learning is really just statistics, automated by code.

Venturing further into machine learning and artificial intelligence

The next step builds on those foundations. This is where you begin thinking about the sorts of problems you want to solve and the types of questions you want to ask. This is actually a creative step where you set the focus for your project: whatever kind of pattern or relationship you want to understand, this is where you can do just that.

One of the difficulties, however, is making sure you have access to the data you need to actually do what you want. Sometimes, you might need to do some serious web scraping or data mining to get hold of the data you want. That's beyond the scope of this piece, but there are plenty of resources out there to help you do just that.

But there are also plenty of ready-made data sets available for you to use in your machine learning project in whichever way you wish. You can find 50 data sets for machine learning here, all for a range of different uses. (If you're trying machine learning for the first time, we'd suggest using one of these data sets and playing around to save you collecting data.)
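If you'd rather not download anything at all to begin with, scikit-learn ships with several small toy data sets. A minimal sketch of loading one looks like this:

# A minimal sketch: loading a bundled toy data set with scikit-learn
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target    # feature matrix and labels
print(X.shape, y.shape)          # (150, 4) (150,)
print(iris.target_names)         # the three iris species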
Getting to grips with data modelling

Although machine learning modelling is the next step in the learning journey, arguably it should happen at the same time as you're thinking about both the questions you're asking and the different data sources you might require. This is because the model, or models, you decide to employ in your project will follow directly from the problems you're trying to tackle and, indeed, the nature and size of the data sets you eventually use.

It's important to note that no model is perfect. There's a rule in the data science and machine learning world called the 'no free lunch' rule: basically, there's no model that offers a shortcut. There will always be trade-offs between different algorithms in how they perform on various factors. To manage this issue, you need to understand what's important to you. Maybe you're not worried about speed, for example? Or perhaps accuracy isn't crucial and you just want to build something that runs quickly.

Broadly, the models you use will fall into one of two categories: supervised or unsupervised.

Supervised machine learning algorithms

Supervised learning is where you have an input and an output and you use an algorithm to better understand the relationship between the two. Ultimately, you want to get to a point where your machine learning system understands the relationship in such a way that you could predict an output.

Supervised learning can be broken down further into regression and classification. Regression is where the output is a number or value, while classification is a specific category, or descriptor. Some algorithms can be used for both regression and classification problems, such as random forest, while others can be used for one or the other. For example, support vector machines can be used for classification problems, while linear regression algorithms can, as the name indicates, be used for regression problems.

Unsupervised machine learning algorithms

Unsupervised machine learning contrasts with supervised machine learning in that there are no outputs on which the algorithm works. If supervised learning 'tells' the algorithm the answers, from which it then needs to understand how those answers were generated, unsupervised learning aims to understand the underlying structure within a given set of data. There aren't any answers to guide the machine learning algorithm.

As above, there are a couple of different approaches to unsupervised machine learning: clustering and association. Clustering helps you understand different groups within a set of data, while association is simply a way of understanding relationships or rules: if this happens, then this will happen too.
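To see both categories side by side, here's a minimal scikit-learn sketch using the toy iris data from earlier. The model choices are illustrative, not recommendations:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: learn the input-output relationship, then score on held-out data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels, just look for structure (three clusters here)
km = KMeans(n_clusters=3, random_state=0)
km.fit(X)
print("cluster assignments:", km.labels_[:10])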
Okay, so what about artificial intelligence?

By now, you will have a solid foundation of knowledge in machine learning. However, this is only the tip of the iceberg; machine learning at its most basic provides a very limited form of artificial intelligence. Advances in artificial intelligence are possible through ever more powerful algorithms, artificial or deep neural networks, that have additional layers of complexity (quite literally, additional neurons). These are the algorithms that are used to power sophisticated applications and tools. From image recognition to image identification, through to speech-to-text and machine translation, the applications of these algorithms are radically transforming our relationship with technology. But you probably already knew that.

The important question is how you actually go about doing it. Well, luckily, in many ways, if you know the core components of machine learning, the more advanced elements of deep learning and artificial neural networks shouldn't actually be as complex as you might at first think.

There are, however, a couple of considerations that become more important as you move deeper into deep learning.

Hardware considerations for deep learning

One of the most important considerations for any deep learning project you want to try is the hardware you're using. For a basic machine learning problem, this shouldn't be an issue. However, as the computations your deep learning system is working on become more extensive, the hardware you use to run them will become a challenge you need to resolve. This is too big an issue to explore here, but you can look in detail at our comparison of different processors here.

Getting started with deep learning frameworks

One of the reasons the whole world is talking about artificial intelligence is because it's easier to do. And this is thanks, in part, to the growth of new deep learning frameworks that make it relatively straightforward to build complex deep learning models. The likes of TensorFlow, Keras, and PyTorch are all helping engineers and data scientists build deep learning models of considerable sophistication. Although they each have their own advantages, and it's well worth spending some time comparing them, there's certainly a lot to be said for simply getting started with them yourself.

What about cloud's impact on machine learning and artificial intelligence?

An interesting development in the machine learning space is the impact of cloud-based solutions. The likes of Azure, AWS, and Google Cloud Platform all offer a number of different services and tools within their overarching cloud products that make performing machine and deep learning tasks much easier. While this is undoubtedly going to be an important development, and, indeed, one you may have encountered already, there is no substitute for simply getting your hands dirty with the data and seeing how the core principles behind machine learning and artificial intelligence actually work.

Conclusion: Don't be scared, take machine learning and artificial intelligence one step at a time

Clearly, with so much hype around artificial intelligence, it's easy to get stuck before you begin. However, by focusing on the core principles and practical application of machine learning, you will be well on your way to helping drive the future of artificial intelligence.

Learn artificial intelligence from scratch with Python: Beginner's Guide to Artificial Intelligence. Dive deeper into deep learning and artificial intelligence with Python: Advanced Guide to Artificial Intelligence.
JSON with JSON.Net

Packt
25 Jun 2015
16 min read
In this article by Ray Rischpater, author of the book JavaScript JSON Cookbook, we show you how you can use strong typing in your applications with JSON using C#, Java, and TypeScript. You'll find the following recipes:
- How to deserialize an object using Json.NET
- How to handle date and time objects using Json.NET
- How to deserialize an object using gson for Java
- How to use TypeScript with Node.js
- How to annotate simple types using TypeScript
- How to declare interfaces using TypeScript
- How to declare classes with interfaces using TypeScript
- Using json2ts to generate TypeScript interfaces from your JSON

(For more resources related to this topic, see here.)

While some say that strong types are for weak minds, the truth is that strong typing in programming languages can help you avoid whole classes of errors in which you mistakenly assume that an object of one type is really of a different type. Languages such as C# and Java provide strong types for exactly this reason.

Fortunately, the JSON serializers for C# and Java support strong typing, which is especially handy once you've figured out your object representation and simply want to map JSON to instances of classes you've already defined. We use Json.NET for C# and gson for Java to convert from JSON to instances of classes you define in your application.

Finally, we take a look at TypeScript, an extension of JavaScript that provides compile-time checking of types, compiling to plain JavaScript for use with Node.js and browsers. We'll look at how to install the TypeScript compiler for Node.js, how to use TypeScript to annotate types and interfaces, and how to use a web page by Timmy Kokke to automatically generate TypeScript interfaces from JSON objects.

How to deserialize an object using Json.NET

In this recipe, we show you how to use Newtonsoft's Json.NET to deserialize JSON to an object that's an instance of a class. We'll use Json.NET because although this works with the existing .NET JSON serializer, there are other things that I want you to know about Json.NET, which we'll discuss in the next two recipes.

Getting ready

To begin, you need to be sure you have a reference to Json.NET in your project. The easiest way to do this is to use NuGet; launch NuGet, search for Json.NET, and click on Install, as shown in the following screenshot:

You'll also need a reference to the Newtonsoft.Json namespace in any file that needs those classes, with a using directive at the top of your file:

using Newtonsoft.Json;

How to do it…

Here's an example that provides the implementation of a simple class, converts a JSON string to an instance of that class, and then converts the instance back into JSON:

using System;
using Newtonsoft.Json;

namespace JSONExample
{
    public class Record
    {
        public string call;
        public double lat;
        public double lng;
    }

    class Program
    {
        static void Main(string[] args)
        {
            String json = @"{ 'call': 'kf6gpe-9',
                'lat': 21.9749, 'lng': 159.3686 }";

            var result = JsonConvert.DeserializeObject<Record>(
                json, new JsonSerializerSettings
                {
                    MissingMemberHandling = MissingMemberHandling.Error
                });
            Console.Write(JsonConvert.SerializeObject(result));

            return;
        }
    }
}

How it works…

In order to deserialize the JSON in a type-safe manner, we need to have a class that has the same fields as our JSON. The Record class, defined in the first few lines, does this, defining fields for call, lat, and lng.
The Newtonsoft.Json namespace provides the JsonConvert class, with the static methods SerializeObject and DeserializeObject. DeserializeObject is a generic method, taking the type of the object that should be returned as a type argument, and as arguments the JSON to parse and an optional argument indicating options for the JSON parsing. We pass the MissingMemberHandling property as a setting, indicating with the value of the enumeration Error that in the event that a field is missing, the parser should throw an exception. After parsing the class, we convert it again to JSON and write the resulting JSON to the console.

There's more…

If you skip passing the MissingMemberHandling option or pass Ignore (the default), you can have mismatches between field names in your JSON and your class, which probably isn't what you want for type-safe conversion. You can also pass the NullValueHandling field with a value of Include or Ignore. If Include, fields with null values are included; if Ignore, fields with null values are ignored.

See also

The full documentation for Json.NET is at http://www.newtonsoft.com/json/help/html/Introduction.htm.

Type-safe deserialization is also possible with JSON support using the .NET serializer; the syntax is similar. For an example, see the documentation for the JavaScriptSerializer class at https://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer(v=vs.110).aspx.

How to handle date and time objects using Json.NET

Dates in JSON are problematic for people because JavaScript's dates are in milliseconds from the epoch, which are generally unreadable to people. Different JSON parsers handle this differently; Json.NET has a nice IsoDateTimeConverter that formats the date and time in ISO format, making it human-readable for debugging or for parsing on platforms other than JavaScript. You can extend this method to convert any kind of formatted data in JSON attributes, too, by creating new converter objects and using the converter object to convert from one value type to another.

How to do it…

Simply include a new IsoDateTimeConverter object when you call JsonConvert.SerializeObject, like this:

string json = JsonConvert.SerializeObject(p, new IsoDateTimeConverter());

How it works…

This causes the serializer to invoke the IsoDateTimeConverter instance with any instance of date and time objects, returning ISO strings like this in your JSON:

2015-07-29T08:00:00
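Putting the pieces together, a minimal end-to-end sketch might look like the following; the Appointment class and its values are hypothetical, introduced only to give the serializer a DateTime to work on:

using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

public class Appointment
{
    public string title;
    public DateTime when;
}

class Program
{
    static void Main(string[] args)
    {
        var p = new Appointment
        {
            title = "Net check-in",
            when = new DateTime(2015, 7, 29, 8, 0, 0)
        };

        // Serialize with ISO 8601 date formatting
        string json = JsonConvert.SerializeObject(p, new IsoDateTimeConverter());
        Console.WriteLine(json);
        // {"title":"Net check-in","when":"2015-07-29T08:00:00"}

        // Json.NET parses the ISO string back into a DateTime
        var copy = JsonConvert.DeserializeObject<Appointment>(json);
        Console.WriteLine(copy.when);
    }
}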
There's more…

Note that this can be parsed by Json.NET, but not by JavaScript; in JavaScript, you'll want to use a function like this:

function isoDateReviver(value) {
  if (typeof value === 'string') {
    var a = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)(?:([+-])(\d{2}):(\d{2}))?Z?$/.exec(value);
    if (a) {
      var utcMilliseconds = Date.UTC(+a[1], +a[2] - 1, +a[3], +a[4], +a[5], +a[6]);
      return new Date(utcMilliseconds);
    }
  }
  return value;
}

The rather hairy regular expression on the third line matches dates in the ISO format, extracting each of the fields. If the regular expression finds a match, it extracts each of the date fields, which are then used by the Date class's UTC method to create a new date. Note that the entire regular expression—everything between the / characters—should be on one line with no whitespace. It's a little long for this page, however!

See also

For more information on how Json.NET handles dates and times, see the documentation and example at http://www.newtonsoft.com/json/help/html/SerializeDateFormatHandling.htm.

How to deserialize an object using gson for Java

Like Json.NET, gson provides a way to specify the destination class to which you're deserializing a JSON object.

Getting ready

You'll need to include the gson JAR file in your application, just as you would for any other external API.

How to do it…

You use the same method as you use for type-unsafe JSON parsing with gson, fromJson, except that you pass the class object to gson as the second argument, like this:

// Assuming we have a class Record that looks like this:
/*
class Record {
  private String call;
  private float lat;
  private float lng;
  // public API would access these fields
}
*/

Gson gson = new com.google.gson.Gson();
String json = "{ \"call\": \"kf6gpe-9\", \"lat\": 21.9749, \"lng\": 159.3686 }";
Record result = gson.fromJson(json, Record.class);

How it works…

The fromJson method always takes a Java class. In the example in this recipe, we convert directly to a plain old Java object that our application can use without needing to use the dereferencing and type conversion interface of JsonElement that gson provides.

There's more…

The gson library can also deal with nested types and arrays. You can also hide fields from being serialized or deserialized by declaring them transient, which makes sense because transient fields aren't serialized.

See also

The documentation for gson and its support for deserializing instances of classes is at https://sites.google.com/site/gson/gson-user-guide#TOC-Object-Examples.

How to use TypeScript with Node.js

Using TypeScript with Visual Studio is easy; it's just part of the installation of Visual Studio for any version after Visual Studio 2013 Update 2. Getting the TypeScript compiler for Node.js is almost as easy—it's an npm install away.

How to do it…

On a command line with npm in your path, run the following command:

npm install -g typescript

The npm option -g tells npm to install the TypeScript compiler globally, so it's available to every Node.js application you write. Once you run it, npm downloads and installs the TypeScript compiler binary for your platform.

There's more…

Once you run this command to install the compiler, you'll have the TypeScript compiler tsc available on the command line. Compiling a file with tsc is as easy as writing the source code, saving it in a file that ends in the .ts extension, and running tsc on it. For example, given the following TypeScript saved in the file hello.ts:

function greeter(person: string) {
  return "Hello, " + person;
}

var user: string = "Ray";

console.log(greeter(user));

Running tsc hello.ts at the command line creates the following JavaScript:

function greeter(person) {
  return "Hello, " + person;
}

var user = "Ray";

console.log(greeter(user));

Try it! As we'll see in the next section, the function declaration for greeter contains a single TypeScript annotation; it declares the argument person to be a string. Add the following line to the bottom of hello.ts:

console.log(greeter(2));

Now, run the tsc hello.ts command again; you'll get an error like this one:

C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2082: Supplied parameters do not match any signature of call target:
        Could not apply type 'string' to argument 1, which is of type 'number'.
C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2087: Could not select overload for 'call' expression.

This error indicates that I'm attempting to call greeter with a value of the wrong type, passing a number where greeter expects a string.
In the next recipe, we'll look at the kinds of type annotations TypeScript supports for simple types.

See also
The TypeScript home page, with tutorials and reference documentation, is at http://www.typescriptlang.org/.

How to annotate simple types using TypeScript
Type annotations with TypeScript are simple decorators appended to the variable or function after a colon. There's support for the same primitive types as in JavaScript, and for declaring interfaces and classes, which we will discuss next.

How to do it…
Here's a simple example of some variable declarations and two function declarations:
function greeter(person: string): string {
  return "Hello, " + person;
}

function circumference(radius: number): number {
  var pi: number = 3.141592654;
  return 2 * pi * radius;
}

var user: string = "Ray";

console.log(greeter(user));
console.log("You need " + circumference(2) + " meters of fence for your dog.");
This example shows how to annotate functions and variables.

How it works…
Variables, either standalone or as arguments to a function, are decorated using a colon and then the type. For example, the first function, greeter, takes a single argument, person, which must be a string. The second function, circumference, takes a radius, which must be a number, and declares a single variable in its scope, pi, which must be a number and has the value 3.141592654. You declare functions in the normal way as in JavaScript, and then add the return type annotation after the argument list, again using a colon and the type. So, greeter returns a string, and circumference returns a number.

There's more…
TypeScript defines the following fundamental type decorators, which map to their underlying JavaScript types:
array: This is a composite type. For example, you can write a list of strings as follows:
var list: string[] = ["one", "two", "three"];
boolean: This type decorator can contain the values true and false.
number: Like numbers in JavaScript itself, this can be any floating-point number.
string: This type decorator is a character string.
enum: An enumeration, written with the enum keyword, like this:
enum Color { Red = 1, Green, Blue };
var c: Color = Color.Blue;
any: This type indicates that the variable may be of any type.
void: This type indicates that the value has no type. You'll use void to indicate a function that returns nothing.

See also
For a list of the TypeScript types, see the TypeScript handbook at http://www.typescriptlang.org/Handbook.

How to declare interfaces using TypeScript
An interface defines how something behaves, without defining the implementation. In TypeScript, an interface names a complex type by describing the fields it has. This is known as structural subtyping.

How to do it…
Declaring an interface is a little like declaring a structure or class; you define the fields in the interface, each with its own type, like this:
interface Record {
  call: string;
  lat: number;
  lng: number;
}

function printLocation(r: Record) {
  console.log(r.call + ': ' + r.lat + ', ' + r.lng);
}

var myObj = {call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686};

printLocation(myObj);

How it works…
The interface keyword in TypeScript defines an interface; as I already noted, an interface consists of the fields it declares with their types. In this listing, I defined a plain JavaScript object, myObj, and then called the function printLocation, which I previously defined and which takes a Record.
When calling printLocation with myObj, the TypeScript compiler checks the fields and the type of each field, and only permits a call to printLocation if the object matches the interface.

There's more…
Beware! TypeScript can only provide compile-time checking. What do you think the following code does?
interface Record {
  call: string;
  lat: number;
  lng: number;
}

function printLocation(r: Record) {
  console.log(r.call + ': ' + r.lat + ', ' + r.lng);
}

var myObj = {call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686};
printLocation(myObj);

var json = '{"call":"kf6gpe-7","lat":21.9749}';
var myOtherObj = JSON.parse(json);
printLocation(myOtherObj);
First, this compiles with tsc just fine. When you run it with node, you'll see the following:
kf6gpe-7: 21.9749, 159.3686
kf6gpe-7: 21.9749, undefined
What happened? The TypeScript compiler does not add run-time type checking to your code, so you can't impose an interface on a run-time-created object that's not a literal. In this example, because the lng field is missing from the JSON, the function can't print it, and prints the value undefined instead.
This doesn't mean that you shouldn't use TypeScript with JSON, however. Type annotations serve a purpose for all readers of the code, be they compilers or people. You can use type annotations to indicate your intent as a developer, and readers of the code can better understand the design and limitations of the code you write.

See also
For more information about interfaces, see the TypeScript documentation at http://www.typescriptlang.org/Handbook#interfaces.

How to declare classes with interfaces using TypeScript
Interfaces let you specify behavior without specifying implementation; classes let you encapsulate implementation details behind an interface. TypeScript classes can encapsulate fields or methods, just as classes in other languages.

How to do it…
Here's an example of our Record structure, this time as a class with an interface:
interface RecordInterface {
  call: string;
  lat: number;
  lng: number;

  printLocation();
}

class Record implements RecordInterface {
  call: string;
  lat: number;
  lng: number;
  constructor(c: string, la: number, lo: number) {
    this.call = c;
    this.lat = la;
    this.lng = lo;
  }

  printLocation() {
    console.log(this.call + ': ' + this.lat + ', ' + this.lng);
  }
}

var myObj: Record = new Record('kf6gpe-7', 21.9749, 159.3686);

myObj.printLocation();

How it works…
The interface keyword, again, defines an interface just as the previous section shows. The class keyword, which you haven't seen before, implements a class; the optional implements keyword indicates that this class implements the interface RecordInterface.
Note that the class implementing the interface must have all of the same fields and methods that the interface prescribes; otherwise, it doesn't meet the requirements of the interface. As a result, our Record class includes fields for call, lat, and lng, with the same types as in the interface, as well as the methods constructor and printLocation. The constructor method is a special method called when you create a new instance of the class using new. Note that with classes, unlike regular objects, the correct way to create them is by using a constructor, rather than just building them up as a collection of fields and values. We do that on the second to last line of the listing, passing the constructor arguments as function arguments to the class constructor.
See also
There's a lot more you can do with classes, including defining inheritance and creating public and private fields and methods. For more information about classes in TypeScript, see the documentation at http://www.typescriptlang.org/Handbook#classes.

Using json2ts to generate TypeScript interfaces from your JSON
This last recipe is more of a tip than a recipe; if you've got some JSON you developed using another programming language or by hand, you can easily create a TypeScript interface for objects to contain the JSON by using Timmy Kokke's json2ts website.

How to do it…
Simply go to http://json2ts.com, paste your JSON in the box that appears, and click on the generate TypeScript button. You'll be rewarded with a second text box that appears and shows you the definition of the TypeScript interface, which you can save as its own file and include in your TypeScript applications.

How it works…
The following figure shows a simple example:
You can save this TypeScript as its own file, a definition file with the suffix .d.ts, and then include the module with your TypeScript using the import keyword, like this:
import module = require('module');

Summary
In this article, we looked at how you can reconcile the type-free nature of JSON with the type safety provided by languages such as C#, Java, and TypeScript to reduce programming errors in your application.

Resources for Article:
Further resources on this subject:
Playing with Swift [article]
Getting Started with JSON [article]
Top two features of GSON [article]

How to auto-generate texts from Shakespeare writing using deep recurrent neural networks

Savia Lobo
16 Feb 2018
6 min read
Our article is an excerpt from the book Natural Language Processing with Python Cookbook, co-authored by Krishna Bhavsar, Naresh Kumar, and Pratap Dangeti. This book gives you unique recipes covering various aspects of performing Natural Language Processing with NLTK, a leading Python platform for NLP.
Today we will learn to use deep recurrent neural networks (RNN) to predict the next character based on a given length of a sentence. Training a model this way lets it generate text continuously, and with enough training epochs the generated text can imitate the writing style of the original writer.

Getting ready...
The Project Gutenberg eBook of the complete works of William Shakespeare is used as the dataset to train the network for automated text generation. The raw file used for training can be downloaded from http://www.gutenberg.org/:
>>> from __future__ import print_function
>>> import numpy as np
>>> import random
>>> import sys
The following code is used to create a dictionary mapping characters to indices, and the reverse mapping, which we will use to convert text into indices at later stages. This is because deep learning models cannot understand English, and everything needs to be mapped into indices to train these models:
>>> path = 'C:\\Users\\prata\\Documents\\book_codes\\NLP_DL\\shakespeare_final.txt'
>>> text = open(path).read().lower()
>>> characters = sorted(list(set(text)))
>>> print('corpus length:', len(text))
>>> print('total chars:', len(characters))
>>> char2indices = dict((c, i) for i, c in enumerate(characters))
>>> indices2char = dict((i, c) for i, c in enumerate(characters))

How to do it…
Before training the model, various preprocessing steps are involved to make it work. The following are the major steps involved:
Preprocessing: Prepare the X and Y data from the given story text file and convert them into a vectorized, index-based format.
Deep learning model training and validation: Train and validate the deep learning model.
Text generation: Generate text with the trained model.

How it works...
The following lines of code describe the entire modeling process of generating text from Shakespeare's writings. Here we have chosen a character window of length 40 for determining the next best single character, which seems a fair choice. Also, the extraction process jumps in steps of three characters, which reduces the overlap between two consecutive extractions and creates a less redundant dataset:
# cut the text in semi-redundant sequences of maxlen characters
>>> maxlen = 40
>>> step = 3
>>> sentences = []
>>> next_chars = []
>>> for i in range(0, len(text) - maxlen, step):
...     sentences.append(text[i: i + maxlen])
...     next_chars.append(text[i + maxlen])
>>> print('nb sequences:', len(sentences))
The following screenshot depicts the total number of sentences considered, 193798, which is enough data for text generation:
The next code block is used to convert the data into a vectorized format for feeding into deep learning models, as the models cannot understand anything about text, words, sentences, and so on. Initially, NumPy arrays of the full dimensions are created with all zeros and then filled at the relevant places using the dictionary mappings:
# Converting indices into vectorized format
>>> X = np.zeros((len(sentences), maxlen, len(characters)), dtype=np.bool)
>>> y = np.zeros((len(sentences), len(characters)), dtype=np.bool)
>>> for i, sentence in enumerate(sentences):
...     for t, char in enumerate(sentence):
...         X[i, t, char2indices[char]] = 1
...     y[i, char2indices[next_chars[i]]] = 1
>>> from keras.models import Sequential
>>> from keras.layers import Dense, LSTM, Activation, Dropout
>>> from keras.optimizers import RMSprop
The deep learning model is created with an RNN, more specifically a Long Short-Term Memory network with 128 hidden neurons, and an output layer whose dimension is the number of characters. The softmax activation is used with the RMSprop optimizer. We encourage readers to try various other parameters to check how the results vary:
#Model Building
>>> model = Sequential()
>>> model.add(LSTM(128, input_shape=(maxlen, len(characters))))
>>> model.add(Dense(len(characters)))
>>> model.add(Activation('softmax'))
>>> model.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.01))
>>> print (model.summary())
As mentioned earlier, deep learning models train on number indices to map input to output (given a length of 40 characters, the model will predict the next best character). The following function is used to convert the predicted probabilities back into a character index, by sampling from them; the maximum index of the sampled character is returned:
# Function to convert prediction into index
>>> def pred_indices(preds, metric=1.0):
...     preds = np.asarray(preds).astype('float64')
...     preds = np.log(preds) / metric
...     exp_preds = np.exp(preds)
...     preds = exp_preds / np.sum(exp_preds)
...     probs = np.random.multinomial(1, preds, 1)
...     return np.argmax(probs)
The model will be trained over 30 iterations with a batch size of 128. The diversity is also varied to see its impact on the predictions (the same sampling logic is pulled out into a reusable function in the sketch at the end of this post):
# Train and Evaluate the Model
>>> for iteration in range(1, 30):
...     print('-' * 40)
...     print('Iteration', iteration)
...     model.fit(X, y, batch_size=128, epochs=1)
...     start_index = random.randint(0, len(text) - maxlen - 1)
...     for diversity in [0.2, 0.7, 1.2]:
...         print('\n----- diversity:', diversity)
...         generated = ''
...         sentence = text[start_index: start_index + maxlen]
...         generated += sentence
...         print('----- Generating with seed: "' + sentence + '"')
...         sys.stdout.write(generated)
...         for i in range(400):
...             x = np.zeros((1, maxlen, len(characters)))
...             for t, char in enumerate(sentence):
...                 x[0, t, char2indices[char]] = 1.
...             preds = model.predict(x, verbose=0)[0]
...             next_index = pred_indices(preds, diversity)
...             pred_char = indices2char[next_index]
...             generated += pred_char
...             sentence = sentence[1:] + pred_char
...             sys.stdout.write(pred_char)
...             sys.stdout.flush()
...         print("\nOne combination completed\n")
The results are shown in the next screenshot, comparing the first iteration (Iteration 1) and the final iteration (Iteration 29). It is apparent that with enough training, the text generation becomes much better than at Iteration 1:
Text generation after Iteration 29 is shown in this image:
Though the text generation seems almost magical, we have generated text using Shakespeare's writings, proving that with the right training and handling, we can imitate the writing style of any particular writer.
If you found this post useful, you may check out the book Natural Language Processing with Python Cookbook to analyze sentence structure and master lexical analysis, syntactic and semantic analysis, pragmatic analysis, and other NLP techniques.
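For reference, here is a minimal sketch (not from the book) that pulls the in-loop sampling logic above into a standalone function, so the trained model can be reused without rerunning the training loop. It assumes the model, characters, maxlen, char2indices, indices2char, and pred_indices names defined in the recipe are in scope:

# A sketch, not from the book: sample new text from the trained model.
# Assumes model, characters, maxlen, char2indices, indices2char and
# pred_indices() from the recipe above are available in scope.
def generate_text(seed, length=400, diversity=0.7):
    sentence = seed[-maxlen:]        # the seed must be at least maxlen chars
    generated = sentence
    for _ in range(length):
        # one-hot encode the current window of maxlen characters
        x = np.zeros((1, maxlen, len(characters)))
        for t, char in enumerate(sentence):
            x[0, t, char2indices[char]] = 1.
        preds = model.predict(x, verbose=0)[0]
        next_char = indices2char[pred_indices(preds, diversity)]
        generated += next_char
        sentence = sentence[1:] + next_char   # slide the window forward
    return generated

print(generate_text(text[:maxlen], length=200, diversity=0.7))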

Exception Handling with Python

Packt
17 Aug 2016
10 min read
In this article, by Ninad Sathaye, author of the book Learning Python Application Development, you will learn techniques to make an application more robust by handling exceptions. Specifically, we will cover the following topics:
What are exceptions in Python?
Controlling the program flow with the try…except clause
Dealing with common problems by handling exceptions
Creating and using custom exception classes
(For more resources related to this topic, see here.)

Exceptions
Before jumping straight into the code and fixing these issues, let's first understand what an exception is and what we mean by handling an exception.

What is an exception?
An exception is an object in Python. It gives us information about an error detected during program execution. The errors noticed while debugging the application were unhandled exceptions, as we didn't see those coming. Later in the article, you will learn the techniques to handle these exceptions. The ValueError and IndexError exceptions seen in the earlier tracebacks are examples of built-in exception types in Python. In the following section, you will learn about some other built-in exceptions supported in Python.

Most common exceptions
Let's quickly review some of the most frequently encountered exceptions. The easiest way is to try running some buggy code and let it report the problem as an error traceback! Start your Python interpreter and write a few lines of buggy code; each line will throw an error traceback with an exception type. These are a few of the built-in exceptions in Python. A comprehensive list of built-in exceptions can be found in the following documentation: https://docs.python.org/3/library/exceptions.html#bltin-exceptions
Python provides BaseException as the base class for all built-in exceptions. However, most of the built-in exceptions do not directly inherit BaseException. Instead, these are derived from a class called Exception, which in turn inherits from BaseException. The built-in exceptions that deal with program exit (for example, SystemExit) are derived directly from BaseException. You can also create your own exception class as a subclass of Exception. You will learn about that later in this article.

Exception handling
So far, we saw how exceptions occur. Now, it is time to learn how to use the try…except clause to handle these exceptions. A very simple try…except clause behaves as follows:
First, the program tries to execute the code inside the try clause.
During this execution, if something goes wrong (if an exception occurs), it jumps out of this try clause. The remaining code in the try block is not executed.
It then looks for an appropriate exception handler in the except clause and executes it.
The except clause used here is a universal one. It will catch all types of exceptions occurring within the try clause. Instead of having this "catch-all" handler, a better practice is to catch the errors that you anticipate and write exception handling code specific to those errors. For example, the code in the try clause might throw an AssertionError. Instead of using the universal except clause, you can write a specific exception handler, such as an except clause that exclusively deals with AssertionError. What this also means is that any error other than an AssertionError will slip through as an unhandled exception. To handle several anticipated errors, we need to define multiple except clauses with different exception handlers. However, at any point of time, only one exception handler will be called. This can be better explained with an example; let's take a look at the following code snippet.
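The book shows this example as a screenshot; the following is a minimal reconstruction from the description that follows, so the exact names and messages are illustrative:

def solve_something():
    a = int(input("Enter a number: "))
    # jump to the AssertionError handler below if a is not positive
    assert a > 0
    # a is valid, so the rest of the function runs; but x was never
    # defined, so the next line raises a NameError at run time
    print("x divided by a is:", x / a)

try:
    solve_something()
except AssertionError:
    print("Uh oh..Assertion Error.")
except NameError:
    print("NameError: the variable x is not defined!")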
The try block calls solve_something(). This function accepts a number as user input and makes an assertion that the number is greater than zero. If the assertion fails, it jumps directly to the handler, except AssertionError. In the other scenario, with a > 0, the rest of the code in solve_something() is executed. You will notice that the variable x is not defined, which results in a NameError. This exception is handled by the other exception clause, except NameError. Likewise, you can define specific exception handlers for anticipated errors.

Raising and re-raising an exception
The raise keyword in Python is used to force an exception to occur. Put another way, it raises an exception. The syntax is simple; just open the Python interpreter and type:
>>> raise AssertionError("some error message")
This produces the following error traceback:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError: some error message
In some situations, we need to re-raise an exception. To understand this concept better, here is a trivial scenario. Suppose, in the try clause, you have an expression that divides a number by zero. In ordinary arithmetic, this expression has no meaning. It's a bug! This causes the program to raise an exception called ZeroDivisionError. If there is no exception handling code, the program will just print the error message and terminate. What if you wish to write this error to some log file and then terminate the program? Here, you can use an except clause to log the error first. Then, use the raise keyword without any arguments to re-raise the exception. The exception will be propagated upwards in the stack. In this example, it terminates the program.
In the example that shows how to re-raise an exception, a division by zero exception is raised while solving the a/b expression, because the value of the variable b is set to 0. For illustration purposes, we assumed that there is no specific exception handler for this error. So, we use the general except clause, where the exception is re-raised after logging the error. If you want to try this yourself, just write the code illustrated earlier in a new Python file, and run it from a terminal window. The following screenshot shows the output of the preceding code:

The else block of try…except
There is an optional else block that can be specified in the try…except clause. The else block is executed only if no exception occurs in the try…except clause. The else block appears after the except clauses and is executed before the finally clause, which we will study next.

finally...clean it up!
There is something else to add to the try…except…else story: an optional finally clause. As the name suggests, the code within this clause is executed at the end of the associated try…except block. Whether or not an exception is raised, the finally clause, if specified, will certainly get executed at the end of the try…except clause. Imagine it as an all-weather guarantee given by Python! The following code snippet shows the finally block in action.
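The book presents this snippet as a screenshot; here is a minimal reconstruction that produces the output shown next (the function body and message wording are illustrative):

# finally_example1.py -- reconstructed sketch of the finally example
try:
    a = int(input("Enter a number: "))
    assert a > 0
    print("Things are looking good!")
except AssertionError:
    print("Uh oh..Assertion Error.")
finally:
    # runs whether or not the assertion failed
    print("Do some special cleanup")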
Running this simple code will produce the following output:
$ python finally_example1.py
Enter a number: -1
Uh oh..Assertion Error.
Do some special cleanup
The last line in the output is the print statement from the finally clause. The code snippets with and without the finally clause are shown in the following screenshot. The code in the finally clause is assured to be executed in the end, even when the except clause instructs the code to return from the function. The finally clause is typically used to perform clean-up tasks before leaving the function. An example use case is to close a database connection or a file. However, note that, for this purpose, you can also use the with statement in Python.

Writing a new exception class
It is trivial to create a new exception class derived from Exception. Open your Python interpreter and create the following class:
>>> class GameUnitError(Exception):
...     pass
...
>>>
That's all! We have a new exception class, GameUnitError, ready to be deployed. How to test this exception? Just raise it. Type the following line of code in your Python interpreter:
>>> raise GameUnitError("ERROR: some problem with game unit")
Raising the newly created exception will print the following traceback:
>>> raise GameUnitError("ERROR: some problem with game unit")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
__main__.GameUnitError: ERROR: some problem with game unit
Copy the GameUnitError class into its own module, gameuniterror.py, and save it in the same directory as attackoftheorcs_v1_1.py. Next, update the attackoftheorcs_v1_1.py file to include the following changes:
First, add the following import statement at the beginning of the file:
from gameuniterror import GameUnitError
The second change is in the AbstractGameUnit.heal method. The updated code is shown in the following code snippet; observe the highlighted code that raises the custom exception whenever the value of self.health_meter exceeds that of self.max_hp.
With these two changes, run the heal_exception_example.py file created earlier. You will see the new exception being raised, as shown in the following screenshot:

Expanding the exception class
Can we do something more with the GameUnitError class? Certainly! Just like any other class, we can define attributes and use them. Let's expand this class further. In the modified version, it will accept an additional argument and some predefined error codes. The updated GameUnitError class is shown in the following screenshot. Let's take a look at the code:
First, it calls the __init__ method of the Exception superclass and then defines some additional instance variables.
A new dictionary object, self.error_dict, holds the integer error codes and the error information as key-value pairs.
The self.error_message stores the information about the current error, depending on the error code provided.
The try…except clause ensures that error_dict actually has the key specified by the code argument. If it doesn't, the except clause retrieves the value for the default error code of 000.
So far, we have made changes to the GameUnitError class and the AbstractGameUnit.heal method. We are not done yet. The last piece of the puzzle is to modify the main program in the heal_exception_example.py file.
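The book shows this program as a screenshot; the following sketch is reconstructed from the description below, and the game setup (the Knight class and the heal arguments) is assumed rather than taken from the book's exact code:

# heal_exception_example.py -- a sketch; the game object names are assumptions
from gameuniterror import GameUnitError
from attackoftheorcs_v1_1 import Knight

if __name__ == '__main__':
    knight = Knight()
    try:
        # heal_by is deliberately far too large, pushing
        # health_meter past max_hp inside heal()
        knight.heal(heal_by=100)
    except GameUnitError as gue:
        # the message string passed to GameUnitError when it was raised
        print(gue)
        # the descriptive message looked up in error_dict
        print(gue.error_message)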
Let's review the code: as the heal_by value is too large, the heal method in the try clause raises the GameUnitError exception. The new except clause handles the GameUnitError exception just like any other built-in exception. Within the except clause, we have two print statements. The first one prints health_meter > max_hp! (recall that when this exception was raised in the heal method, this string was given as the first argument to the GameUnitError instance). The second print statement retrieves and prints the error_message attribute of the GameUnitError instance. We have got all the changes in place. We can run this example from a terminal window as:
$ python heal_exception_example.py
The output of the program is shown in the following screenshot:
In this simple example, we have just printed the error information to the console. You can further write verbose error logs to a file and keep track of all the error messages generated while the application is running.

Summary
This article served as an introduction to the basics of exception handling in Python. We saw how exceptions occur, learned about some common built-in exception classes, and wrote simple code to handle these exceptions using the try…except clause. The article also demonstrated techniques such as raising and re-raising exceptions, using the finally clause, and so on. The later part of the article focused on implementing custom exception classes. We defined a new exception class and used it for raising custom exceptions for our application. With exception handling, the code is in better shape.

Resources for Article:
Further resources on this subject:
Mining Twitter with Python – Influence and Engagement [article]
Exception Handling in MySQL for Python [article]
Python LDAP applications - extra LDAP operations and the LDAP URL library [article]
How to run Lambda functions on AWS Greengrass

Vijin Boricha
27 Apr 2018
7 min read
AWS Greengrass is a form of edge computing service that extends the cloud's functionality to your IoT devices by allowing data collection and analysis closer to its point of origin. This is accomplished by executing AWS Lambda functions locally on the IoT device itself, while still using the cloud for management and analytics. Today, we will learn how to leverage AWS Greengrass to run simple Lambda functions on an IoT device.
How does this help a business? Well, to start with, using AWS Greengrass you are now able to respond to locally generated events in near real time. With Greengrass, you can program your IoT devices to locally process and filter data and only transmit the important chunks back to AWS for analysis. This also has a direct impact on costs, as well as on the amount of data transmitted back to the cloud.
Here are the core components of AWS Greengrass:
Greengrass Core (GGC) software: The Greengrass Core software is a packaged module that consists of a runtime to allow local executions of Lambda functions. It also contains an internal message broker and a deployment agent that periodically notifies the AWS Greengrass service about the device's configuration, state, available updates, and so on. The software also ensures that the connection between the device and the IoT service is secured with the help of keys and certificates.
Greengrass groups: A Greengrass group is a collection of Greengrass Core settings and definitions that are used to manage one or more Greengrass-backed IoT devices. The groups internally comprise a few other components, namely:
Greengrass group definition: A collection of information about your Greengrass group
Device definition: A collection of IoT devices that are a part of a Greengrass group
Greengrass group settings: Contains connection and configuration information, along with the necessary IAM roles required for interacting with other AWS services
Greengrass Core: The IoT device itself
Lambda functions: A list of Lambda functions that can be deployed to the Greengrass Core
Subscriptions: A collection of a message source, a message target, and an MQTT topic to transmit the messages. The source or target can be the IoT service, a Lambda function, or even the IoT device itself.
Greengrass Core SDK: Greengrass also provides an SDK that you can use to write and run Lambda functions on Greengrass Core devices. The SDK currently supports Java 8, Python 2.7, and Node.js 6.10.
With this key information in mind, let's go ahead and deploy our very own Greengrass Core on an IoT device.

Running Lambda functions on AWS Greengrass
With the Greengrass Core software up and running on your IoT device, we can now go ahead and run a simple Lambda function on it! For this particular section, we will be leveraging an AWS Lambda blueprint that prints a simple Hello World message:
To get started, first we will need to create our Lambda function. From the AWS Management Console, filter out the Lambda service using the Filter option or, alternatively, go to this URL: https://console.aws.amazon.com/lambda/home. Ensure that the Lambda function is launched from the same region as the AWS Greengrass. In this case, we are using the US-East-1 (N. Virginia) region.
On the AWS Lambda console landing page, select the Create function option to get started.
Since we are going to be leveraging an existing function blueprint for this use case, select the Blueprints option provided on the Create function page.
Use the filter to find a blueprint with the name greengrass-hello-world. There are two templates present to date that match this name: one function is based on Python, while the other is based on Node.js. For this particular section, select the greengrass-hello-world Python function and click on Configure to proceed (a sketch of what this blueprint's handler roughly looks like appears at the end of this post).
Fill out the required details for the new function, such as a Name followed by a valid Role. For this section, go ahead and select the Create new role from template option. Provide a suitable Role name and finally, from the Policy templates drop-down list, select the AWS IoT Button Permissions role. Once completed, click on Create function to complete the function's creation process.
But before you move on to associating this function with your AWS Greengrass, you will also need to create a new version of this function. Select the Publish new version option from the Actions tab. Provide a suitable Version description text and click on Publish once done. Your function is now ready for AWS Greengrass.
Now, head back to the AWS IoT dashboard and select the newly deployed Greengrass group from the Groups option present on the navigation pane. From the Greengrass group page, select the Lambdas option from the navigation pane, followed by the Add Lambda option, as shown in the following screenshot:
On the Add a Lambda to your Greengrass group page, you can choose either to Create a new Lambda function or to Use an existing Lambda function. Since we have already created our function, select the Use existing function option. On the next page, select your Greengrass Lambda function and click Next to proceed. Finally, select the version of the deployed function and click on Finish once done.
To finish things off, we will need to create a new subscription between the Lambda function (source) and the AWS IoT service (destination). Select the Subscriptions option from the same Greengrass group page, as shown. Click on Add Subscription to proceed:
Stay tuned for our next post where we will look at ways to effectively monitor IoT devices. We leveraged AWS Greengrass and Lambda to develop a cost-effective and speedy solution. You read an excerpt from the book AWS Administration - The Definitive Guide - Second Edition written by Yohan Wadia.  Whether you are a seasoned system admin or a rookie, this book will help you learn all the skills you need to work with the AWS cloud.  

Façade Pattern – Being Adaptive with Façade

Packt
12 Jan 2016
11 min read
In this article by Chetan Giridhar, author of the book Learning Python Design Patterns - Second Edition, we will get introduced to the Façade design pattern and how it is used in software application development. We will work with a sample use case and implement it in Python v3.5. In brief, we will cover the following topics in this article:
An understanding of the Façade design pattern with a UML diagram
A real-world use case with the Python v3.5 code implementation
The Façade pattern and the principle of least knowledge
(For more resources related to this topic, see here.)

Understanding the Façade design pattern
Façade generally refers to the face of a building, especially an attractive one. It can also refer to a behavior or appearance that gives a false idea of someone's true feelings or situation. When people walk past a façade, they can appreciate the exterior face but aren't aware of the complexities of the structure within. This is how the Façade pattern is used: it hides the complexities of the internal system and provides an interface to the client that can access the system in a very simplified way.
Consider the example of a storekeeper. When you, as a customer, visit a store to buy certain items, you're not aware of the layout of the store. You typically approach the storekeeper, who is well aware of the store system. Based on your requirements, the storekeeper picks up items and hands them over to you. Isn't this easy? The customer need not know how the store looks, and s/he gets the stuff done through a simple interface, the storekeeper.
The Façade design pattern essentially does the following:
It provides a unified interface to a set of interfaces in a subsystem and defines a high-level interface that helps the client use the subsystem in an easy way.
Façade discusses representing a complex subsystem with a single interface object. It doesn't encapsulate the subsystem but actually combines the underlying subsystems.
It promotes the decoupling of the implementation from multiple clients.

A UML class diagram
We will now discuss the Façade pattern with the help of the following UML diagram:
As we observe the UML diagram, you'll realize that there are three main participants in this pattern:
Façade: The main responsibility of the façade is to wrap up a complex group of subsystems so that it can provide a pleasing look to the outside world.
System: This represents a set of varied subsystems that make the whole system compound and difficult to view or work with.
Client: The client interacts with the Façade so that it can easily communicate with the subsystem and get the work completed. It doesn't have to bother about the complex nature of the system.
You will now learn a little more about the three main participants from the data structure's perspective.

Façade
The following points will give us a better idea of the Façade:
It is an interface that knows which subsystems are responsible for a request
It delegates the client's requests to the appropriate subsystem objects using composition
For example, if the client is looking for some work to be accomplished, it need not go to individual subsystems; it can simply contact the interface (the Façade) that gets the work done

System
In the Façade world, System is an entity that performs the following:
It implements subsystem functionality and is represented by a class. Ideally, a System is represented by a group of classes that are responsible for different operations.
It handles the work assigned by the Façade object but has no knowledge of the façade and keeps no reference to it. For instance, when the client requests the Façade for a certain service, the Façade chooses the right subsystem that delivers the service based on the type of service.

Client
Here's how we can describe the client:
The client is a class that instantiates the Façade
It makes requests to the Façade to get the work done from the subsystems

Implementing the Façade pattern in the real world
To demonstrate the applications of the Façade pattern, let's take an example that we'd have experienced in our lifetime. Consider that you have a marriage in your family and you are in charge of all the arrangements. Whoa! That's a tough job on your hands. You have to book a hotel or place for the marriage, talk to a caterer for food arrangements, organize a florist for all the decorations, and finally handle the musical arrangements expected for the event.
In yesteryears, you'd have done all this by yourself, such as talking to the relevant folks, coordinating with them, and negotiating on the pricing, but now life is simpler. You go and talk to an event manager who handles this for you. S/he will make sure that they talk to the individual service providers and get the best deal for you.
From the Façade pattern perspective, we will have the following three main participants:
Client: It's you, who needs all the marriage preparations to be completed in time before the wedding. They should be top class and the guests should love the celebrations.
Façade: The event manager, who's responsible for talking to all the folks that need to work on specific arrangements such as food, flower decorations, among others
Subsystems: They represent the systems that provide services such as catering, hotel management, and flower decorations
Let's develop an application in Python v3.5 and implement this use case. We start with the client first. It's you! Remember, you're the one who has been given the responsibility to make sure that the marriage preparations are done and the event goes fine!
However, you're being clever here and passing on the responsibility to the event manager, aren't you? Let's now look at the You class. In this example, you create an object of the EventManager class so that the manager can work with the relevant folks on marriage preparations while you relax.
class You(object):
    def __init__(self):
        print("You:: Whoa! Marriage Arrangements??!!!")
    def askEventManager(self):
        print("You:: Let's Contact the Event Manager\n\n")
        em = EventManager()
        em.arrange()
    def __del__(self):
        print("You:: Thanks to Event Manager, all preparations done! Phew!")
Let's now move ahead and talk about the Façade class. As discussed earlier, the Façade class simplifies the interface for the client. In this case, EventManager acts as a façade and simplifies the work for You. The Façade talks to the subsystems and does all the booking and preparations for the marriage on your behalf.
Here is the Python code for the EventManager class:
class EventManager(object):

    def __init__(self):
        print("Event Manager:: Let me talk to the folks\n")

    def arrange(self):
        self.hotelier = Hotelier()
        self.hotelier.bookHotel()

        self.florist = Florist()
        self.florist.setFlowerRequirements()

        self.caterer = Caterer()
        self.caterer.setCuisine()

        self.musician = Musician()
        self.musician.setMusicType()
Now that we're done with the Façade and client, let's dive into the subsystems. We have developed the following classes for this scenario:
Hotelier is for the hotel bookings. It has a method to check whether the hotel is free on that day (__isAvailable) and, if it is free, to book the hotel (bookHotel).
The Florist class is responsible for flower decorations. Florist has the setFlowerRequirements() method, which is used to set the expectations on the kind of flowers needed for the marriage decoration.
The Caterer class is used to deal with the caterer and is responsible for the food arrangements. Caterer exposes the setCuisine() method to accept the type of cuisine to be served at the marriage.
The Musician class is designed for the musical arrangements at the marriage. It uses the setMusicType() method to understand the music requirements for the event.
class Hotelier(object):
    def __init__(self):
        print("Arranging the Hotel for Marriage? --")

    def __isAvailable(self):
        print("Is the Hotel free for the event on given day?")
        return True

    def bookHotel(self):
        if self.__isAvailable():
            print("Registered the Booking\n\n")

class Florist(object):
    def __init__(self):
        print("Flower Decorations for the Event? --")

    def setFlowerRequirements(self):
        print("Carnations, Roses and Lilies would be used for Decorations\n\n")

class Caterer(object):
    def __init__(self):
        print("Food Arrangements for the Event --")

    def setCuisine(self):
        print("Chinese & Continental Cuisine to be served\n\n")

class Musician(object):
    def __init__(self):
        print("Musical Arrangements for the Marriage --")

    def setMusicType(self):
        print("Jazz and Classical will be played\n\n")

you = You()
you.askEventManager()
The output of the preceding code is given here:
In the preceding code example:
The EventManager class is the Façade that simplifies the interface for You
EventManager uses composition to create objects of the subsystems such as Hotelier, Caterer, and others

The principle of least knowledge
As you have learned in the initial parts of this article, the Façade provides a unified system that makes subsystems easy to use. It also decouples the client from the subsystem of components. The design principle that is employed behind the Façade pattern is the principle of least knowledge. The principle of least knowledge guides us to reduce the interactions between objects to just a few friends that are close enough to you. In real terms, it means the following:
When designing a system, for every object created, one should look at the number of classes that it interacts with and the way in which the interaction happens.
Following the principle, make sure that we avoid situations where many classes are created tightly coupled to each other.
If there are a lot of dependencies between classes, the system becomes hard to maintain.
Any changes in one part of the system can lead to unintentional changes to other parts of the system, which means that the system is exposed to regressions, and this should be avoided.

Summary
We began the article by first understanding the Façade design pattern and the context in which it's used. We understood the basis of the Façade and how it is effectively used in software architecture. We looked at how the Façade design pattern creates a simplified interface for clients to use. It simplifies the complexity of subsystems so that the client benefits. The Façade doesn't encapsulate the subsystem, and the client is free to access the subsystems even without going through the Façade. You also learned the pattern with a UML diagram and a sample code implementation in Python v3.5. We understood the principle of least knowledge and how its philosophy governs the Façade design pattern.

Further resources on this subject:
Asynchronous Programming with Python [article]
Optimization in Python [article]
The Essentials of Working with Python Collections [article]

Lighting basics

Packt
19 Feb 2016
5 min read
In this article by Satheesh PV, author of the book Unreal Engine 4 Game Development Essentials, we will learn that lighting is an important factor in your game which can be easily overlooked, and wrong usage can severely impact performance. But with proper settings, combined with post process, you can create very beautiful and realistic scenes. We will see how to place lights and how to adjust some important values.
(For more resources related to this topic, see here.)

Placing lights
In Unreal Engine 4, lights can be placed in two different ways: through the Modes tab or by right-clicking in the level.
Modes tab: In the Modes tab, go to the Place tab (Shift + 1) and go to the Lights section. From there you can drag and drop various lights.
Right-clicking: Right-click in the viewport and in Place Actor you can select your light.
Once a light is added to the level, you can use the transform tool (W to move, E to rotate) to change the position and rotation of your selected light. Since a Directional Light casts light from an infinite source, updating its location has no effect.

Various lights
Unreal Engine 4 features four different types of light Actors. They are:
Directional Light: Simulates light from a source that is infinitely far away. Since all shadows cast by this light will be parallel, this is the ideal choice for simulating sunlight.
Spot Light: Emits light from a single point in a cone shape. There are two cones (inner cone and outer cone). Within the inner cone, light achieves full brightness, and between the inner and outer cone a falloff takes place, which softens the illumination. That means that after the inner cone, light slowly loses its illumination as it moves towards the outer cone.
Point Light: Emits light from a single point in all directions, much like a real-world light bulb.
Sky Light: Does not really emit light, but instead captures the distant parts of your scene (for example, Actors that are placed beyond the Sky Distance Threshold) and applies them as light. That means you can have light coming from your atmosphere, distant mountains, and so on. Note that a Sky Light will only update when you rebuild your lighting or press Recapture Scene (in the Details panel with the Sky Light selected).

Common light settings
Now that we know how to place lights into a scene, let's take a look at some of the common settings of a light. Select your light in a scene, and in the Details panel you will see these settings:
Intensity: Determines the intensity (energy) of the light. This is in lumen units, so, for example, 1700 lm (lumens) corresponds to a 100 W bulb.
Light Color: Determines the color of the light.
Attenuation Radius: Sets the limit of the light. It also calculates the falloff of the light. This setting is only available for Point Lights and Spot Lights.
Attenuation Radius from left to right: 100, 200, 500.
Source Radius: Defines the size of specular highlights on surfaces. This effect can be subdued by adjusting the Min Roughness setting. This also affects building light using Lightmass; a larger Source Radius will cast softer shadows. Since this is processed by Lightmass, it will only work on lights with mobility set to Static.
Source Radius 0. Notice the sharp edges of the shadow.
Source Length: Same as Source Radius.

Light mobility
Light mobility is an important setting to keep in mind when placing lights in your level, because it changes the way light works and impacts performance. There are three settings that you can choose.
They are as follows:
Static: A completely static light that has no impact on performance. This type of light will not cast shadows or specular on dynamic objects (for example, characters, movable objects, and so on). Example usage: use this light where the player will never reach, such as distant cityscapes, ceilings, and so on. You can literally have millions of lights with static mobility.
Stationary: This is a mix of static and dynamic light; it can change its color and brightness while running the game, but cannot move or rotate. Stationary lights can interact with dynamic objects and are used where the player can go.
Movable: This is a completely dynamic light, and all its properties can be changed at runtime. Movable lights are heavier on performance, so use them sparingly.
Only four or fewer stationary lights are allowed to overlap each other. If you have more than four stationary lights overlapping each other, the light icon will change to a red X, which indicates that the lights are using dynamic shadows at a severe performance cost! In the following screenshot, you can easily see the overlapping lights. Under View Mode, you can change to Stationary Light Overlap to see which light is causing the issue.

Summary
We will look into different light mobilities and learn more about Lightmass Global Illumination, which is the static Global Illumination solver created by Epic Games. We will also learn how to prepare assets to be used with it.

Resources for Article:
Further resources on this subject:
Understanding Material Design [article]
Build a First Person Shooter [article]
Machine Learning With R [article]
Setting Up the Environment for ASP.NET MVC 6

Packt
02 Nov 2016
9 min read
In this article by Mugilan TS Raghupathi, author of the book Learning ASP.NET Core MVC Programming, we explain the setup for getting started with programming in ASP.NET MVC 6. In any development project, it is vital to set up the right kind of development environment so that you can concentrate on developing the solution rather than solving environment or configuration problems. With respect to .NET, Visual Studio is the de-facto standard IDE (Integrated Development Environment) for building web applications in .NET.
In this article, you'll be learning about the following topics:
Purpose of an IDE
Different offerings of Visual Studio
Installation of Visual Studio Community 2015
Creating your first ASP.NET 5 project and its project structure
(For more resources related to this topic, see here.)

Purpose of an IDE
First of all, let us see why we need an IDE when you can type the code in Notepad, compile it, and execute it. When you develop a web application, you might need the following things to be productive:
Code editor: This is the text editor where you type your code. Your code editor should be able to recognize the different constructs of your programming language, such as if conditions and for loops. In Visual Studio, all of your keywords are highlighted in blue.
Intellisense: Intellisense is a context-aware code-completion feature available in most modern IDEs, including Visual Studio. One example: when you type a dot after an object, this Intellisense feature lists all the methods available on the object. This helps developers write code faster and more easily.
Build/Publish: It is helpful if you can build or publish the application using a single click or a single command. Visual Studio provides several options out of the box to build a separate project or to build the complete solution with a single click. This makes the build and deployment of your application easier.
Templates: Depending on the type of the application, you might have to create different folders and files along with the boilerplate code. So, it's very helpful if your IDE supports the creation of different kinds of templates. Visual Studio generates different kinds of templates with the code for ASP.NET Web Forms, MVC, and Web API to get you up and running.
Ease of adding items: Your IDE should allow you to add different kinds of items with ease. For example, you should be able to add an XML file without any issues. And if there is any problem with the structure of your XML file, it should be able to highlight the issue, give you the relevant information, and help you fix it.

Visual Studio offerings
There are different versions of Visual Studio 2015 available to satisfy the various needs of developers and organizations. Primarily, there are four versions of Visual Studio 2015:
Visual Studio Community
Visual Studio Professional
Visual Studio Enterprise
Visual Studio Test Professional

System requirements
Visual Studio can be installed on computers running the operating system Windows 7 Service Pack 1 and above. You can get the complete list of requirements from the following URL:
https://www.visualstudio.com/en-us/downloads/visual-studio-2015-system-requirements-vs.aspx

Visual Studio Community 2015
This is a fully featured IDE available for building desktop applications, web applications, and cloud services. It is available free of cost for individual users.
You can download Visual Studio Community from the following URL:
https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx
Throughout this book, we will be using the Visual Studio Community version for development, as it is available free of cost to individual developers.

Visual Studio Professional
As the name implies, Visual Studio Professional is targeted at professional developers and contains features such as Code Lens for improving your team's productivity. It also has features for greater collaboration within the team.

Visual Studio Enterprise
Visual Studio Enterprise is a full-blown version of Visual Studio with a complete set of features for collaboration, including a team foundation server, modeling, and testing.

Visual Studio Test Professional
Visual Studio Test Professional is primarily aimed at the testing team, or the people who are involved in testing, which might include developers. In any software development methodology, whether the waterfall model or agile, developers need to execute the development suite test cases for the code they are developing.

Installation of Visual Studio Community
Follow the given steps to install Visual Studio Community 2015:
Visit the following link to download Visual Studio Community 2015:
https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx
Click on the Download Community 2015 button. Save the file in a folder where you can retrieve it easily later:
Run the downloaded executable file:
Click on Run and the following screen will appear:
There are two types of installation: default and custom. The default installation installs the most commonly used features, and this will cover most developer use cases. Custom installation helps you to customize the components that you want installed, such as the following:
Click on the Install button after selecting the installation type.
Depending on your memory and processor speed, it will take 1 to 2 hours to install.
Once all the components are installed, you will see the following Setup completed screen:

Installation of ASP.NET 5
When we install the Visual Studio Community 2015 edition, ASP.NET 5 is not installed by default. As the ASP.NET MVC 6 application runs on top of ASP.NET 5, we need to install ASP.NET 5. There are a couple of ways to install ASP.NET 5:
Get ASP.NET 5 from https://get.asp.net/
Another option is to install it from the New Project template in Visual Studio
This option is a bit easier, as you don't need to search for and install it separately. The following are the detailed steps:
Create a new project by selecting File | New Project or using the shortcut Ctrl + Shift + N:
Select ASP.NET Web Application, enter the project name, and click on OK:
The following window will appear, to select the template. Select the Get ASP.NET 5 RC option, as shown in the following screenshot:
When you click on OK in the preceding screen, the following window will appear:
When you click on the Run or Save button in the preceding dialog, you will get the following screen asking for ASP.NET 5 Setup. Select the checkbox, I agree to the license terms and conditions, and click on the Install button:
Installation of ASP.NET 5 might take a couple of hours, and once it is completed you'll get the following screen:
During the process of installation of ASP.NET 5 RC1 Update 1, it might ask you to close Visual Studio. If asked, please do so.
Project structure in an ASP.NET 5 application

Once ASP.NET 5 RC1 is successfully installed, open Visual Studio, create a new project, and select ASP.NET 5 Web Application. A new project will be created; its structure is described in the following sections.

File-based project

Whenever you add a file or folder in your file system (inside the ASP.NET 5 project folder), the changes will be automatically reflected in your project structure.

Support for full .NET and .NET Core

You can see a couple of references in the project: DNX 4.5.1 and DNX Core 5.0. DNX 4.5.1 provides the functionality of the full-blown .NET Framework, whereas DNX Core 5.0 supports only the core functionality, which is used if you are deploying the application across platforms such as Apple OS X and Linux. The development and deployment of an ASP.NET MVC 6 application on a Linux machine will be explained in the book.

The project.json file

Usually, in an ASP.NET web application, assemblies are added as references and listed in the C# project file. But in an ASP.NET 5 application, we have a JSON file by the name of project.json, which contains all the necessary configuration, with all the .NET dependencies in the form of NuGet packages. This makes dependency management easier. NuGet is a package manager provided by Microsoft, which makes package installation and uninstallation easier. Prior to NuGet, all dependencies had to be installed manually.

The dependencies section identifies the list of packages the application depends on. The frameworks section states which frameworks the application supports. The scripts section identifies the scripts to be executed during the build process of the application. Include and exclude properties can be used in any section to include or exclude any item.

Controllers

This folder contains all of your controller files. Controllers are responsible for handling requests, communicating with the models, and generating the views.

Models

All of your classes representing the domain data will be present in this folder.

Views

Views are files that contain your frontend components and are presented to the end users of the application. This folder contains all of your Razor view files.

Migrations

Any database-related migrations will be available in this folder. Database migrations are the C# files that contain the history of any database changes done through Entity Framework (an ORM framework). This will be explained in detail in the book.

The wwwroot folder

This folder acts as a root folder, and it is the ideal container for all of your static files, such as CSS and JavaScript files. All the files placed in the wwwroot folder can be accessed directly from the path, without going through the controller.

Other files

The appsettings.json file is the config file where you can configure application-level settings. Bower, npm (Node Package Manager), and Gulp (configured through gulpfile.js) are client-side tools supported by ASP.NET 5 applications.
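To tie the preceding sections together, here is a minimal sketch of what a project.json might contain. This is illustrative only: the package names and versions reflect the RC1 time frame and are not prescriptive.

{
  "version": "1.0.0-*",
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  },
  "scripts": {
    "prepublish": [ "npm install", "bower install", "gulp min" ]
  },
  "exclude": [ "wwwroot", "node_modules" ]
}

The dnx451 and dnxcore50 monikers in the frameworks section correspond to the DNX 4.5.1 and DNX Core 5.0 references described earlier. Similarly, a minimal controller of the kind that lives in the Controllers folder might look like the following; the class and action names are hypothetical placeholders:

// RC1-era namespace; later releases renamed it to Microsoft.AspNetCore.Mvc.
using Microsoft.AspNet.Mvc;

public class HomeController : Controller
{
    // Handles a request such as /Home/Index and renders the Razor view
    // found at Views/Home/Index.cshtml.
    public IActionResult Index()
    {
        return View();
    }
}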
Summary

In this article, you have learnt about the different offerings in Visual Studio. Step-by-step instructions were provided for the installation of the Visual Studio Community version, which is freely available for individual developers. We also discussed the new project structure of an ASP.NET 5 application and the changes when compared to the previous versions.

In this book, we are going to discuss the controllers and their roles and functionalities. We'll also build a controller and its associated action methods and see how they work.

Resources for Article:

Further resources on this subject:
Designing your very own ASP.NET MVC Application [article]
Debugging Your .NET Application [article]
Using ASP.NET Controls in SharePoint [article]
SOA—Service Oriented Architecture

Packt
20 Oct 2009
17 min read
What is SOA?

SOA is the acronym for Service Oriented Architecture. As it has come to be known, SOA is an architectural design pattern in which several guiding principles determine the nature of the design. Basically, SOA states that every component of a system should be a service, and the system should be composed of several loosely-coupled services. A service here means a unit of a program that serves a business process. "Loosely-coupled" here means that these services should be independent of each other, so that changing one of them does not affect any other service.

SOA is not a specific technology, nor a specific language. It is just a blueprint, or a system design approach. It is an architecture model that aims to enhance the efficiency, agility, and productivity of an enterprise system. The key concepts of SOA are services, high interoperability, and loose coupling.

Several other architectures/technologies, such as RPC, DCOM, and CORBA, have existed for a long time and have attempted to address the client/server communication problems. The difference between SOA and these other approaches is that SOA tries to address the problem from the client side, not from the server side. It tries to decouple the client side from the server side, instead of bundling them, to make the client-side application much easier to develop and maintain.

This is exactly what happened when object-oriented programming (OOP) came into play 20 years ago. Prior to object-oriented programming, most designs were procedure-oriented, meaning the developer had to control the process of an application. Without OOP, in order to finish a block of work, the developer had to be aware of the sequence that the code would follow. This sequence was then hard-coded into the program, and any change to this sequence would result in a code change. With OOP, an object simply supplied certain operations; it was up to the caller of the object to decide the sequence of those operations. The caller could mash up all of the operations and finish the job in whatever order was needed. There was a paradigm shift from the object side to the caller side.

This same paradigm shift is happening today. Without SOA, every application is a bundled, tightly coupled solution. The client-side application is often compiled and deployed along with the server-side applications, making it impossible to quickly change anything on the server side. DCOM and CORBA were on the right track to ease this problem by making the server-side components reside on remote machines. The client application could directly call a method on a remote object, without knowing that this object was actually far away, just like calling a method on a local object. However, the client-side applications remained tightly coupled with these remote objects, and any change to the remote object would still result in recompiling or redeploying the client application.

Now, with SOA, the remote objects are truly treated as remote objects. To the client applications, they are no longer objects; they are services. The client application is unaware of how the service is implemented, or of the platform and language behind it. The client application interacts with these services by exchanging messages. What a client application knows now is only the interfaces, or protocols, of the services, such as the format of the messages to be passed to the service, and the format of the expected response messages from the service.
Historically, there have been many other architectural design approaches, technologies, and methodologies to integrate existing applications. EAI (Enterprise Application Integration) is just one of them. Often, organizations have many different applications, such as order management systems, accounts receivable systems, and customer relationship management systems. Each application has been designed and developed by different people using different tools and technologies at different times, and to serve different purposes. However, between these applications, there are no standard, common ways to communicate. EAI is the process of linking these applications and others in order to realize financial and operational competitive advantages.

It may seem that SOA is just an extension of EAI. The similarity is that they are both designed to connect different pieces of applications in order to build an enterprise-level system for business. But fundamentally, they are quite different. EAI attempts to connect legacy applications without modifying any of the applications, while SOA is a fresh approach to solving the same problem.

Why SOA?

So why do we need SOA now? The answer is in one word: agility. Business requirements change frequently, as they always have. The IT department has to respond more quickly and cost-effectively to those changes. With a traditional architecture, all components are bundled together with each other. Thus, even a small change to one component will require a large number of other components to be recompiled and redeployed. The quality assurance (QA) effort is also huge for any code change. The processes of gathering requirements, designing, development, QA, and deployment are too long for businesses to wait for, and become actual bottlenecks.

To complicate matters further, some business processes are no longer static. Requirements change on an ad hoc basis, and a business needs to be able to dynamically define its own processes whenever it wants. A business needs a system that is agile enough for its day-to-day work. This is very hard, if not impossible, with existing traditional infrastructure and systems.

This is where SOA comes into play. SOA's basic unit is a service. These services are building blocks that business users can use to define their own processes. Services are designed and implemented so that they can serve different purposes or processes, and not just specific ones. No matter what new processes a business needs to build or what existing processes a business needs to modify, the business users should always be able to use existing service blocks, in order to compete with others according to current market conditions. Also, if necessary, some new service blocks can be used.

These services are also designed and implemented so that they are loosely coupled, and independent of one another. A change to one service does not affect any other service. Also, the deployment of a new service does not affect any existing service. This greatly eases release management and makes agility possible.

For example, a GetBalance service can be designed to retrieve the balance of a loan. When a borrower calls in to query the status of a specific loan, this GetBalance service may be called by the application that is used by the customer service representatives. When a borrower makes a payment online, this service can also be called to get the balance of the loan, so that the borrower will know the balance of his or her loan after the payment. Yet in the payment posting process, this service can still be used to calculate the accrued interest for a loan, by multiplying the balance by the interest rate. Even further, a new process can be created by business users to utilize this service if a loan balance needs to be retrieved.

The GetBalance service is developed and deployed independently from all of the above processes. Actually, the service exists without even knowing who its clients will be, or even how many clients there will be. All of the client applications communicate with this service through its interface, and its interface will remain stable once it is in production. If we have to change the implementation of this service, for example by fixing a bug or changing an algorithm inside a method of the service, all of the client applications can still work without any change.
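As an illustration only, here is a sketch of what such a stable contract might look like in WCF (the implementation technology used later in this article); the interface and method names are hypothetical, and the data access is a placeholder:

using System.ServiceModel;

// The contract is the only thing any client application ever sees.
[ServiceContract]
public interface ILoanService
{
    [OperationContract]
    decimal GetBalance(string loanNumber);
}

// The implementation is free to change (bug fixes, new algorithms)
// without touching the customer service application, the online
// payment site, or the payment posting process, because they all
// depend only on the ILoanService contract above.
public class LoanService : ILoanService
{
    public decimal GetBalance(string loanNumber)
    {
        return LookUpBalance(loanNumber);
    }

    private decimal LookUpBalance(string loanNumber)
    {
        // A real service would query the loan database here.
        return 0m;
    }
}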
When combined with the more mature Business Process Management (BPM) technology, SOA plays an even more important role in an organization's efforts to achieve agility. Business users can create and maintain processes within BPM, and through SOA they can plug a service into any of those processes. The front-end BPM application is loosely coupled to the back-end SOA system. This combination of BPM and SOA gives an organization much greater flexibility in achieving agility.

How do we implement SOA?

Now that we've established why SOA is needed by the business, the question becomes: how do we implement SOA? To implement SOA in an organization, three key elements have to be evaluated: people, process, and technology. Firstly, the people in the organization must be ready to adopt SOA. Secondly, the organization must know the processes that the SOA approach will include, including their definition, scope, and priority. Finally, the organization should choose the right technology to implement it. Note that people and processes take precedence over technology in an SOA implementation, but they are out of the scope of this article. In this article, we will assume people and processes are all ready for an organization to adopt SOA.

Technically, there are many SOA approaches. To a certain degree, traditional technologies such as RPC, DCOM, and CORBA, or more modern technologies such as IBM WebSphere MQ, Java RMI, and .NET Remoting, could all be categorized as service-oriented, and can be used to implement SOA for an organization. However, all of these technologies have limitations, such as being language- or platform-specific, complex to implement, or able to support only binary transports. The most important shortcoming of these approaches is that the server-side applications are tightly coupled with the client-side applications, which is against the SOA principle.

Today, with the emergence of web service technologies, SOA has become a reality. Thanks to the dramatic increase in network bandwidth, and given the maturity of web service standards such as WS-Security and WS-AtomicTransaction, an SOA back end can now be implemented as a real system.

SOA from different users' perspectives

However, as we said earlier, SOA is not a technology, but only a style of architecture, or an approach to building software products. Different people view SOA in different ways. In fact, many companies now have their own definitions for SOA. Many companies claim they can offer an SOA solution, while they are really just trying to sell their products. The key point here is that SOA is not a solution. SOA alone can't solve any problem.
It has to be implemented with a specific approach to become a real solution. You can't buy an SOA solution. You may be able to buy some kinds of products to help you realize your own SOA, but this SOA should be customized to your specific environment, for your specific needs.

Even within the same organization, different players will think about SOA in quite different ways. What follows are just some examples of how different players in an organization judge the success of an SOA initiative using different criteria. [Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007, ID Number: G00152446]

To a programmer, SOA is a form of distributed computing in which the building blocks (services) may come from other applications or be offered to them. SOA increases the scope of a programmer's product and adds to his or her resources, while also closely resembling familiar modular software design principles.

To a software architect, SOA translates to the disappearance of fences between applications. Architects turn to the design of business functions rather than to self-contained and isolated applications. The software architect becomes interested in collaborating with a business analyst to get a clear picture of the business functionality and scope of the application. SOA turns software architects into integration architects and business experts.

For Chief Information Officers (CIOs), SOA is an investment in the future. Expensive in the short term, its long-term promises are lower costs and greater flexibility in meeting new business requirements. Re-use is the primary anticipated benefit, as a means to reduce the cost and time of new application development.

For business analysts, SOA is the bridge between them and the IT organization. It carries the promise that IT designers will understand them better, because the services in SOA reflect the business functions in business process models.

For CEOs, SOA is expected to help IT become more responsive to business needs and facilitate competitive business change.

Complexities in SOA implementation

Although SOA makes it possible for business parties to achieve agility, SOA itself is technically not simple to implement. In some cases, it even makes software development more complex than ever, because with SOA you are building for unknown problems. On one hand, you have to make sure that the SOA blocks you are building are useful blocks. On the other, you need a framework within which you can assemble those blocks to perform business activities.

The technology issues associated with SOA are more challenging than vendors would like users to believe. Web services technology has turned SOA into an affordable proposition for most large organizations by providing a universally accepted, standard foundation. However, web services play a technology role only for the SOA backplane, which is the software infrastructure that enables SOA-related interoperability and integration.

Figure: the technical complexity of SOA (from Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007, ID Number: G00152446).

As Gartner says, users must understand the complex world of middleware, and should use point-to-point web service connections only for small-scale, experimental SOA projects. If the number of services deployed grows to more than 20 or 30, then use a middleware-based intermediary: the SOA backplane.
The SOA backplane could be an Enterprise Service Bus (ESB), a Message-Oriented Middleware (MOM), or an Object Request Broker (ORB). However, in this article, we will not cover it. We will build only point-to-point services using WCF.

Web services

There are many approaches to realizing SOA, but the most popular and practical one is using web services.

What is a web service?

A web service is a software system designed to support interoperable machine-to-machine interaction over a network. A web service is typically hosted on a remote machine (the provider) and called by a client application (the consumer) over a network. After the provider of a web service publishes the service, the client can discover it and invoke it. The communications between a web service and a client application use XML messages. A web service is hosted within a web server, and HTTP is used as the transport protocol between the server and the client applications.

Figure: the interaction of web services.

Web services were invented to solve the interoperability problem between applications. In the early 90s, as LANs, WANs, and the Internet developed, integrating different applications became a big problem. An application might have been developed using C++ or Java, and run on a Unix box, a Windows PC, or even a mainframe computer. There was no easy way for it to communicate with other applications. It was the development of XML that made it possible to share data between applications across hardware boundaries and networks, or even over the Internet.

For example, a Windows application might need to display the price of a particular stock. With a web service, this application can make a request to a URL and pass an XML string such as <QuoteRequest><GetPrice Symbol='XYZ'/></QuoteRequest>. The requested URL is actually the Internet address of a web service, which, upon receiving the above quote request, gives a response, <QuoteResponse><QuotePrice Symbol='XYZ'>51.22</QuotePrice></QuoteResponse>. The Windows application then uses an XML parser to interpret the response package and display the price on the screen.

The reason it is called a web service is that it is designed to be hosted in a web server, such as Microsoft Internet Information Server, and called over the Internet, typically via the HTTP or HTTPS protocols. This is to ensure that a web service can be called by any application, using any programming language, and under any operating system, as long as there is an active Internet connection and, of course, an open HTTP/HTTPS port, which is true for almost every computer on the Internet.

Each web service has a unique URL and contains various methods. When calling a web service, you have to specify which method you want to call, and pass the required parameters to the web service method. Each web service method will also give a response package to tell the caller the execution results.

Besides new applications being developed specifically as web services, legacy applications can also be wrapped up and exposed as web services. So, an IBM mainframe accounting system might be able to provide external customers with a link to check the balance of an account.

Web service WSDL

In order to be called by other applications, each web service has to supply a description of itself, so that other applications will know how to call it. This description is provided in a language called WSDL. WSDL stands for Web Services Description Language. It is an XML format that defines and describes the functionalities of the web service, including the method names, parameter names and types, and return data types of the web service. For a Microsoft ASMX web service, you can get the WSDL by adding ?WSDL to the end of the web service URL, say http://localhost/MyService/MyService.asmx?WSDL.

Web service proxy

A client application calls a web service through a proxy. A web service proxy is a stub class between a web service and a client. It is normally auto-generated by a tool such as the Visual Studio IDE, according to the WSDL of the web service. It can be reused by any client application. The proxy contains stub methods mimicking all of the methods of the web service, so that a client application can call each method of the web service through these stub methods. It also contains other necessary information required by the client to call the web service, such as custom exceptions, custom data and class types, and so on. The address of the web service can be embedded within the proxy class, or it can be placed inside a configuration file.

A proxy class is always for a specific language. For each web service, there could be a proxy class for Java clients, a proxy class for C# clients, and yet another proxy class for COBOL clients. To call a web service from a client application, the proper proxy class first has to be added to the client project. Then, with an optional configuration file, the address of the web service can be defined. Within the client application, a web service object can be instantiated, and its methods can be called just as any other normal method would be.
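As a sketch of how little the client has to do, assume a proxy class named QuoteServiceProxy has been generated from the quote service's WSDL; the class name, method, and URL here are hypothetical:

// The proxy class would normally be auto-generated by Visual Studio
// from the service's WSDL and added to the client project.
var proxy = new QuoteServiceProxy();

// The service address can be set in code, as here, or picked up
// from a configuration file.
proxy.Url = "http://localhost/QuoteService/QuoteService.asmx";

// Behind this ordinary-looking method call, the proxy serializes the
// request into XML, posts it over HTTP, and parses the XML response.
decimal price = proxy.GetPrice("XYZ");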
SOAP

There are many standards for web services. SOAP is one of them. SOAP was originally an acronym for Simple Object Access Protocol, and it was designed by Microsoft. As the protocol became popular with the spread of web services, and its original meaning was misleading, the acronym was dropped with version 1.2 of the standard. It is now merely a protocol, maintained by the W3C.

SOAP is now a protocol for exchanging XML-based messages over computer networks. It is widely used by web services and has become their de facto protocol. With SOAP, the client application can send a request in XML format to a server application, and the server application will send back a response in XML format. The transport for SOAP is normally HTTP/HTTPS, and the wide acceptance of HTTP is one of the reasons why SOAP is widely accepted today.
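To show what this looks like on the wire, here is a simplified, illustrative SOAP 1.1 envelope carrying the quote request from the earlier example; only the envelope namespace is standard, while the body content is hypothetical:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <QuoteRequest>
      <GetPrice Symbol="XYZ" />
    </QuoteRequest>
  </soap:Body>
</soap:Envelope>

The server's reply comes back wrapped in the same kind of envelope, with the QuoteResponse element inside the soap:Body.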