
How to migrate Power BI datasets to Microsoft Analysis Services models [Tutorial]

Pravin Dhandre
29 Jun 2018
5 min read
The Azure Analysis Services web designer supports importing a data model contained within a Power BI Desktop file. The imported (migrated) model can then take advantage of the resources available to the Azure Analysis Services server and can be accessed from client tools such as Power BI Desktop. Additionally, Azure Analysis Services provides a Visual Studio project file and a Model.bim file for the migrated model, which a corporate BI team can use in SSDT for Visual Studio.

In this tutorial, you will learn how to migrate your Power BI data to Microsoft Analysis Services for further self-service BI solutions, delivering flexibility to a wide network of stakeholders. This article is an excerpt from the book Mastering Microsoft Power BI by Brett Powell.

The following process migrates the model within a Power BI Desktop file to an Azure Analysis Services server and downloads the Visual Studio project file for the migrated model:

1. Open the Web designer from the Overview page of the Azure Analysis Services resource in the Azure portal.
2. On the Models form, click Add, and then provide a name for the new model in the New model form.
3. Select the Power BI Desktop File source icon at the bottom and choose the file on the Import menu.
4. Click Import to begin the migration process.

The following screenshot represents these four steps from the Azure Analysis Services web designer:

In this example, a Power BI Desktop file (AdWorks Enterprise.pbix) that contains an import mode model based on two on-premises sources (SQL Server and Excel) is imported via the Azure Analysis Services web designer. Once the import is complete, the Field list from the model is exposed on the right, and the imported model is accessible from client tools like any other Azure Analysis Services model. For example, refreshing the Azure AS server in SQL Server Management Studio will expose the new database (AdWorks Enterprise).
Likewise, the Azure Analysis Services database connection in Power BI Desktop (Get Data | Azure) can be used to connect to the migrated model, as shown in the following screenshot. Just like the SQL Server Analysis Services database connection (Get Data | Database), the only required field is the name of the server, which is provided in the Azure portal.

To obtain the Visual Studio project for the migrated model, follow these steps:

1. From the Overview page of the Azure Analysis Services resource, select the Open in Visual Studio project option from the context menu on the far right, as shown in the following screenshot.
2. Save the zip file provided by Azure Analysis Services to a secure local network location.
3. Extract the files from the zip file to expose the Analysis Services project and .bim file, as shown in the following screenshot.
4. In Visual Studio, open a project/solution (File | Open | Project/Solution) and navigate to the downloaded project file (.smproj). Select the project file and click Open.
5. Double-click the Model.bim file in the Solution Explorer window to expose the metadata of the migrated model.

All of the objects of the data model built into the Power BI Desktop file, including Data Sources, Queries, and Measures, are accessible in SSDT just like standard Analysis Services projects, as shown in the following screenshot. The preceding screenshot from Diagram view in SQL Server Data Tools exposes the two on-premises sources of the imported PBIX file via the Tabular Model Explorer window. By default, the deployment server of the Analysis Services project in SSDT is set to the Azure Analysis Services server.

As an alternative to a new solution with a single project, an existing solution containing an existing Analysis Services project could be opened and the new project from the migration added to it. This can be accomplished by right-clicking the existing solution's name in the Solution Explorer window and selecting Existing project from the Add menu (Add | Existing project).
This approach allows the corporate BI developer to view and compare both models and optionally implement incremental changes, such as new columns or measures that were exclusive to the Power BI Desktop file. The following screenshot from a solution in Visual Studio includes both the migrated model (via the project file) and an existing Analysis Services model (AdWorks Import):

The ability to quickly migrate Power BI datasets to Analysis Services models complements the flexibility and scale of Power BI Premium capacity, allowing organizations to manage and deploy Power BI on their own terms. You have now migrated a Power BI dataset to Analysis Services and can make further edits to the model to mine better insights from it.

If you found this tutorial useful, do check out the book Mastering Microsoft Power BI and start producing insightful reports from hundreds of data sources and scale across the enterprise.

Related reading:
- How to use M functions within Microsoft Power BI for querying data
- Building a Microsoft Power BI Data Model
- How to build a live interactive visual dashboard in Power BI with Azure Stream

How to Build TensorFlow Models for Mobile and Embedded devices

Savia Lobo
15 May 2018
12 min read
TensorFlow models can be used in applications running on mobile and embedded platforms. TensorFlow Lite and TensorFlow Mobile are two flavors of TensorFlow for resource-constrained mobile devices. TensorFlow Lite supports a subset of the functionality of TensorFlow Mobile, and its smaller binary size and fewer dependencies result in better performance. This article covers training a model so it can be integrated into an application, then saving it and using it for inference and prediction in the mobile application.

[box type="note" align="" class="" width=""]This article is an excerpt from the book Mastering TensorFlow 1.x written by Armando Fandango. This book will help you leverage the power of TensorFlow and Keras to build deep learning models, using concepts such as transfer learning, generative adversarial networks, and deep reinforcement learning.[/box]

To learn how to use TensorFlow models on mobile devices, the following topics are covered:

- TensorFlow on mobile platforms
- TF Mobile in Android apps
- TF Mobile demo on Android
- TF Mobile demo on iOS
- TensorFlow Lite
- TF Lite demo on Android
- TF Lite demo on iOS

TensorFlow on mobile platforms

TensorFlow can be integrated into mobile apps for many use cases that involve one or more of the following machine learning tasks:

- Speech recognition
- Image recognition
- Gesture recognition
- Optical character recognition
- Image or text classification
- Image, text, or speech synthesis
- Object identification

To run TensorFlow in mobile apps, we need two major ingredients:

- A trained and saved model that can be used for predictions
- A TensorFlow binary that can receive the inputs, apply the model, produce the predictions, and send the predictions as output

The high-level architecture looks like the following figure: the mobile application code sends the inputs to the TensorFlow binary, which uses the trained model to compute predictions and sends them back.
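The round-trip between the two ingredients can be sketched abstractly. The following Python stub is illustrative only — it is not TensorFlow's API — and mimics the feed → run → fetch cycle described above, with a plain function standing in for the trained, saved model:

```python
# Conceptual stub (not TensorFlow's API): the contract between mobile app
# code and the TensorFlow binary is feed inputs -> run -> fetch outputs.
# A plain Python function stands in for the trained, saved model.

def trained_model(pixels):
    # Stand-in model: "classifies" an image by its mean brightness.
    return "bright" if sum(pixels) / len(pixels) > 0.5 else "dark"

class InferenceBinary:
    """Mimics the role of the TensorFlow binary in the figure."""

    def __init__(self, model):
        self.model = model
        self.inputs = None
        self.outputs = None

    def feed(self, pixels):
        # The mobile application code sends the inputs to the binary.
        self.inputs = pixels

    def run(self):
        # The binary applies the trained model to compute predictions.
        self.outputs = self.model(self.inputs)

    def fetch(self):
        # The predictions are sent back to the application code.
        return self.outputs

binary = InferenceBinary(trained_model)
binary.feed([0.9, 0.8, 0.7, 1.0])
binary.run()
print(binary.fetch())  # -> bright
```

The same feed/run/fetch shape reappears almost verbatim in the real Android and iOS APIs shown in the following sections.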
TF Mobile in Android apps

The TensorFlow ecosystem can be used in Android apps through the interface class TensorFlowInferenceInterface and the TensorFlow Java API in the jar file libandroid_tensorflow_inference_java.jar. You can either use the jar file from JCenter, download a precompiled jar from ci.tensorflow.org, or build it yourself. The inference interface is available as a JCenter package and can be included in the Android project by adding the following code to the build.gradle file:

```groovy
allprojects {
    repositories {
        jcenter()
    }
}

dependencies {
    compile 'org.tensorflow:tensorflow-android:+'
}
```

Note: Instead of using the pre-built binaries from JCenter, you can also build them yourself using Bazel or CMake by following the instructions at this link: https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/contrib/android/README.md

Once the TF library is configured in your Android project, you can call the TF model with the following four steps:

1. Load the model:

```java
TensorFlowInferenceInterface inferenceInterface =
    new TensorFlowInferenceInterface(assetManager, modelFilename);
```

2. Send the input data to the TensorFlow binary:

```java
inferenceInterface.feed(inputName, floatValues, 1, inputSize, inputSize, 3);
```

3. Run the prediction or inference:

```java
inferenceInterface.run(outputNames, logStats);
```

4. Receive the output from the TensorFlow binary:

```java
inferenceInterface.fetch(outputName, outputs);
```

TF Mobile demo on Android

In this section, we shall learn about recreating the Android demo app provided by the TensorFlow team in their official repo. The Android demo will install the following four apps on your Android device:

- TF Classify: This is an object identification app that identifies the images in the input from the device camera and classifies them into one of the pre-defined classes. It does not learn new types of pictures but tries to classify them into one of the categories that it has already learned.
The app is built using the Inception model pre-trained by Google.

- TF Detect: This is an object detection app that detects multiple objects in the input from the device camera. It continues to identify objects as you move the camera around in continuous picture feed mode.
- TF Stylize: This is a style transfer app that transfers one of the selected predefined styles onto the input from the device camera.
- TF Speech: This is a speech recognition app that identifies your speech, and if it matches one of the predefined commands in the app, it highlights that command on the device screen.

Note: The sample demo only works on Android devices with an API level greater than 21, and the device must have a modern camera that supports FOCUS_MODE_CONTINUOUS_PICTURE. If your device camera does not support this feature, then you have to add the patch submitted to TensorFlow by the author: https://github.com/tensorflow/tensorflow/pull/15489/files

The easiest way to build and deploy the demo app on your device is using Android Studio. To build it this way, follow these steps:

1. Install Android Studio. We installed Android Studio on Ubuntu 16.04 from the instructions at the following link: https://developer.android.com/studio/install.html
2. Check out the TensorFlow repository, and apply the patch mentioned in the previous tip. Let's assume you checked out the code into the tensorflow folder in your home directory.
3. Using Android Studio, open the Android project in the path ~/tensorflow/tensorflow/examples/android. Your screen will look similar to this:
4. Expand the Gradle Scripts option in the left bar and then open the build.gradle file.
5. In the build.gradle file, locate the def nativeBuildSystem definition and set it to 'none'. In the version of the code we checked out, this definition is at line 43:

```groovy
def nativeBuildSystem = 'none'
```

6. Build the demo and run it on either a real or simulated device. We tested the app on a number of devices.
You can also build the APK and install it on a virtual or physically connected device. Once the app is installed on the device, you will see the four apps we discussed earlier.

You can also build the whole demo app from source using Bazel or CMake by following the instructions at this link: https://github.com/tensorflow/tensorflow/tree/r1.4/tensorflow/examples/android

TF Mobile in iOS apps

TensorFlow can be included in iOS apps by following these steps:

1. Include TF Mobile in your app by adding a file named Podfile in the root directory of your project, with the following content:

```ruby
target 'Name-Of-Your-Project'
pod 'TensorFlow-experimental'
```

2. Run the pod install command to download and install the TensorFlow Experimental pod.
3. Run open myproject.xcworkspace to open the workspace, so you can add the prediction code to your application logic.

Note: To create your own TensorFlow binaries for iOS projects, follow the instructions at this link: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios

Once the TF library is configured in your iOS project, you can call the TF model with the following four steps:

1. Load the model:

```cpp
PortableReadFileToProto(file_path, &tensorflow_graph);
```

2. Create a session:

```cpp
tensorflow::Status s = session->Create(tensorflow_graph);
```

3. Run the prediction or inference and get the outputs:

```cpp
std::string input_layer = "input";
std::string output_layer = "output";
std::vector<tensorflow::Tensor> outputs;
tensorflow::Status run_status = session->Run(
    {{input_layer, image_tensor}}, {output_layer}, {}, &outputs);
```

4. Fetch the output data:

```cpp
tensorflow::Tensor* output = &outputs[0];
```

TF Mobile demo on iOS

In order to build the demo on iOS, you need Xcode 7.3 or later. Follow these steps to build the iOS demo apps:

1. Check out the TensorFlow code into a tensorflow folder in your home directory.
2. Open a terminal window and execute the following commands from your home folder to download the Inception V1 model, extract the label and graph files, and move these files into the data folders inside the sample app code:

```shell
$ mkdir -p ~/Downloads
$ curl -o ~/Downloads/inception5h.zip \
    https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip \
    && unzip ~/Downloads/inception5h.zip -d ~/Downloads/inception5h
$ cp ~/Downloads/inception5h/* ~/tensorflow/tensorflow/examples/ios/benchmark/data/
$ cp ~/Downloads/inception5h/* ~/tensorflow/tensorflow/examples/ios/camera/data/
$ cp ~/Downloads/inception5h/* ~/tensorflow/tensorflow/examples/ios/simple/data/
```

3. Navigate to one of the sample folders and download the experimental pod:

```shell
$ cd ~/tensorflow/tensorflow/examples/ios/camera
$ pod install
```

4. Open the Xcode workspace:

```shell
$ open tf_simple_example.xcworkspace
```

5. Run the sample app in the device simulator. The sample app will appear with a Run Model button. The camera app requires an Apple device to be connected, while the other two can also run in a simulator.

TensorFlow Lite

TF Lite is the new kid on the block and was still in developer preview at the time of writing this book. TF Lite is a very small subset of TensorFlow Mobile and TensorFlow, so the binaries compiled with TF Lite are very small in size and deliver superior performance.
Apart from reducing the size of binaries, TF Lite employs various other techniques, such as:

- The kernels are optimized for various device and mobile architectures
- The values used in the computations are quantized
- The activation functions are pre-fused
- It leverages specialized machine learning software or hardware available on the device, such as the Android NN API

The workflow for using models in TF Lite is as follows:

1. Get the model: You can train your own model or pick a pre-trained model available from different sources, use the pre-trained model as is, retrain it with your own data, or retrain it after modifying some parts of the model. As long as you have a trained model in a file with the extension .pb or .pbtxt, you are good to proceed to the next step. We learned how to save models in the previous chapters.
2. Checkpoint the model: The model file only contains the structure of the graph, so you also need the checkpoint file. The checkpoint file contains the serialized variables of the model, such as weights and biases. We learned how to save a checkpoint in the previous chapters.
3. Freeze the model: The checkpoint and the model files are merged, a step also known as freezing the graph. TensorFlow provides the freeze_graph tool for this step, which can be executed as follows:

```shell
$ freeze_graph --input_graph=mymodel.pb \
    --input_checkpoint=mycheckpoint.ckpt \
    --input_binary=true \
    --output_graph=frozen_model.pb \
    --output_node_name=mymodel_nodes
```

4. Convert the model: The frozen model from step 3 needs to be converted to the TF Lite format with the toco tool provided by TensorFlow:

```shell
$ toco --input_file=frozen_model.pb \
    --input_format=TENSORFLOW_GRAPHDEF \
    --output_format=TFLITE \
    --input_type=FLOAT \
    --input_arrays=input_nodes \
    --output_arrays=mymodel_nodes \
    --input_shapes=n,h,w,c
```

5. The .tflite model saved in step 4 can now be used inside an Android or iOS app that employs the TF Lite binary for inference.
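Among the techniques listed above, quantization is easy to illustrate in a few lines. The sketch below implements a simple affine 8-bit quantization of a list of weights — an assumed textbook scheme for illustration, not TF Lite's actual implementation:

```python
# Illustrative sketch of the quantization idea: map float weights onto
# 8-bit integers over their observed range, trading precision for size.
# This is an assumed textbook affine scheme, not TF Lite's implementation.

def quantize(values, num_bits=8):
    """Map floats onto integers 0 .. 2**num_bits - 1 over their range."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0  # avoid zero division
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate floats from the quantized integers."""
    return [i * scale + lo for i in q]

weights = [-0.51, 0.02, 0.33, 1.27]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)

# Each restored weight is within half a quantization step of the original.
assert all(abs(a - b) <= scale * 0.51 for a, b in zip(weights, restored))
```

Storing 8-bit integers plus a scale and offset instead of 32-bit floats is one reason TF Lite binaries and models can be so much smaller.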
The process of including the TF Lite binary in your app is continuously evolving, so we recommend following the information at this link to include the TF Lite binary in your Android or iOS app: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc

Generally, you would use the graph_transforms:summarize_graph tool to prune the model obtained in step 1. The pruned model will only have the paths that lead from input to output at inference or prediction time. Any other nodes and paths that are required only for training or debugging purposes, such as saving checkpoints, are removed, thus making the size of the final model very small.

The official TensorFlow repository comes with a TF Lite demo that uses a pre-trained MobileNet to classify the input from the device camera into 1001 categories. The demo app displays the probabilities of the top three categories.

TF Lite demo on Android

To build the TF Lite demo on Android, follow these steps:

1. Install Android Studio. We installed Android Studio on Ubuntu 16.04 from the instructions at the following link: https://developer.android.com/studio/install.html
2. Check out the TensorFlow repository, and apply the patch mentioned in the previous tip. Let's assume you checked out the code into the tensorflow folder in your home directory.
3. Using Android Studio, open the Android project from the path ~/tensorflow/tensorflow/contrib/lite/java/demo. If it complains about a missing SDK or Gradle components, please install those components and sync Gradle.
4. Build the project and run it on a virtual device with API > 21.

We received the following warnings, but the build succeeded. You may want to resolve the warnings if the build fails:

Warning: The Jack toolchain is deprecated and will not run. To enable support for Java 8 language features built into the plugin, remove 'jackOptions { ...
}' from your build.gradle file, and add android.compileOptions.sourceCompatibility 1.8 and android.compileOptions.targetCompatibility 1.8. Note: Future versions of the plugin will not support usage of 'jackOptions' in build.gradle. To learn more, go to https://d.android.com/r/tools/java-8-support-message.html

Warning: The specified Android SDK Build Tools version (26.0.1) is ignored, as it is below the minimum supported version (26.0.2) for Android Gradle Plugin 3.0.1. Android SDK Build Tools 26.0.2 will be used. To suppress this warning, remove "buildToolsVersion '26.0.1'" from your build.gradle file, as each version of the Android Gradle Plugin now has a default version of the build tools.

TF Lite demo on iOS

In order to build the demo on iOS, you need Xcode 7.3 or later. Follow these steps to build the iOS demo apps:

1. Check out the TensorFlow code into a tensorflow folder in your home directory.
2. Build the TF Lite binary for iOS from the instructions at this link: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite
3. Navigate to the sample folder and download the pod:

```shell
$ cd ~/tensorflow/tensorflow/contrib/lite/examples/ios/camera
$ pod install
```

4. Open the Xcode workspace:

```shell
$ open tflite_camera_example.xcworkspace
```

5. Run the sample app in the device simulator.

We learned about using TensorFlow models on mobile applications and devices. TensorFlow provides two ways to run on mobile devices: TF Mobile and TF Lite. We learned how to build TF Mobile and TF Lite apps for iOS and Android, using the TensorFlow demo apps as examples.

If you found this post useful, do check out the book Mastering TensorFlow 1.x to skill up for building smarter, faster, and more efficient machine learning and deep learning systems.

Related reading:
- The 5 biggest announcements from TensorFlow Developer Summit 2018
- Getting started with Q-learning using TensorFlow
- Implement Long-short Term Memory (LSTM) with TensorFlow

Mailing with Spring Mail

Packt
04 Jun 2015
19 min read
In this article by Anjana Mankale, author of the book Mastering Spring Application Development, we shall see how we can use the Spring mail template to e-mail recipients. We shall also demonstrate Spring mail template configurations under different scenarios.

Spring mail message handling process

The following diagram depicts the flow of the Spring mail message process. With this, we can clearly understand how mail is sent using a Spring mail template: a message is created and handed to the transport protocol, which interacts with internet protocols, and the message is then received by the recipients. The Spring mail framework requires a mail (SMTP) configuration and the message to be sent as input; the mail API interacts with internet protocols to send the message. In the next section, we shall look at the classes and interfaces in the Spring mail framework.

Interfaces and classes used for sending mail with Spring

The package org.springframework.mail is used for mail configuration in a Spring application. The following are the three main interfaces used for sending mail:

- MailSender: This interface is used to send simple mail messages.
- JavaMailSender: This interface is a subinterface of MailSender and supports sending MIME mail messages.
- MimeMessagePreparator: This is a callback interface that supports the JavaMailSender interface in the preparation of mail messages.

The following classes are used for sending mail using Spring:

- SimpleMailMessage: This class has properties such as to, from, cc, bcc, and sentDate, among others, and is sent via MailSender implementations.
- JavaMailSenderImpl: This class is an implementation of the JavaMailSender interface.
- MimeMessageHelper: This class helps with preparing MIME messages.
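Before diving into Spring configuration, it helps to see the fields a "simple mail message" actually carries. The sketch below is illustrative only and independent of Spring — it shows the same from/to/subject/body structure using Python's standard email library, with placeholder addresses:

```python
# Illustrative only: the fields a "simple mail message" carries (from, to,
# subject, body), shown with Python's standard email library rather than
# Spring's API. Addresses are placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@localhost"
msg["To"] = "packt@localhost"
msg["Subject"] = "Order confirmation"
msg.set_content("Thank you for your order.")

# as_string() yields the serialized form handed to the transport protocol.
wire = msg.as_string()
assert "Subject: Order confirmation" in wire
assert msg.get_content_type() == "text/plain"
```

Spring's SimpleMailMessage wraps exactly these fields; the MailSender implementation then performs the SMTP hand-off shown in the diagram above.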
Sending mail using the @Configuration annotation

We shall demonstrate here how to send mail using the Spring mail API. First, we provide all the SMTP details in a .properties file and read them into a class annotated with @Configuration. The name of the class is MailConfiguration. The mail.properties file contents are shown below:

```properties
mail.protocol=smtp
mail.host=localhost
mail.port=25
mail.smtp.auth=false
mail.smtp.starttls.enable=false
mail.from=me@localhost
mail.username=
mail.password=
```

```java
@Configuration
@PropertySource("classpath:mail.properties")
public class MailConfiguration {

    @Value("${mail.protocol}")
    private String protocol;
    @Value("${mail.host}")
    private String host;
    @Value("${mail.port}")
    private int port;
    @Value("${mail.smtp.auth}")
    private boolean auth;
    @Value("${mail.smtp.starttls.enable}")
    private boolean starttls;
    @Value("${mail.from}")
    private String from;
    @Value("${mail.username}")
    private String username;
    @Value("${mail.password}")
    private String password;

    @Bean
    public JavaMailSender javaMailSender() {
        JavaMailSenderImpl mailSender = new JavaMailSenderImpl();
        Properties mailProperties = new Properties();
        mailProperties.put("mail.smtp.auth", auth);
        mailProperties.put("mail.smtp.starttls.enable", starttls);
        mailSender.setJavaMailProperties(mailProperties);
        mailSender.setHost(host);
        mailSender.setPort(port);
        mailSender.setProtocol(protocol);
        mailSender.setUsername(username);
        mailSender.setPassword(password);
        return mailSender;
    }
}
```

The next step is to create a REST controller to send mail. We shall use the SimpleMailMessage class since we don't have any attachment.
```java
@RestController
class MailSendingController {

    private final JavaMailSender javaMailSender;

    @Autowired
    MailSendingController(JavaMailSender javaMailSender) {
        this.javaMailSender = javaMailSender;
    }

    @RequestMapping("/mail")
    @ResponseStatus(HttpStatus.CREATED)
    SimpleMailMessage send() {
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo("packt@localhost");
        mailMessage.setReplyTo("anjana@localhost");
        mailMessage.setFrom("Sonali@localhost");
        mailMessage.setSubject("Vani veena Pani");
        mailMessage.setText("MuthuLakshmi how are you? Call Me Please [...]");
        javaMailSender.send(mailMessage);
        return mailMessage;
    }
}
```

Sending mail using MailSender and SimpleMailMessage with XML configuration

A "simple mail message" means the e-mail sent will only be text-based, with no HTML formatting, no images, and no attachments. In this section, consider a scenario where we send a welcome mail to the user as soon as the user places an order in the application. In this scenario, the mail is sent after the database insertion operation succeeds.

Create a separate folder, called com.packt.mailService, for the mail service. The following are the steps for sending mail using the MailSender interface and SimpleMailMessage class:

1. Create a new Maven web project with the name Spring4MongoDB_MailChapter3. This example reuses the same Eshop db database with MongoDB for CRUD operations on Customer, Order, and Product, along with the same MVC configurations, source files, and dependencies as used previously.
2. Add these dependencies to the pom.xml file:

```xml
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-mail</artifactId>
    <version>3.0.2.RELEASE</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>javax.activation</groupId>
    <artifactId>activation</artifactId>
    <version>1.1-rev-1</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>javax.mail</groupId>
    <artifactId>mail</artifactId>
    <version>1.4.3</version>
</dependency>
```

3. Compile the Maven project.
4. Create a separate folder called com.packt.mailService for the mail service.
5. Create a simple class named MailSenderService and autowire the MailSender and SimpleMailMessage classes. The basic skeleton is shown here:

```java
public class MailSenderService {

    @Autowired
    private MailSender mailSender;
    @Autowired
    private SimpleMailMessage simplemailmessage;

    public void sendmail(String from, String to, String subject, String body) {
        /* Code */
    }
}
```

6. Next, create an object of SimpleMailMessage and set mail properties, such as from, to, and subject, on it:

```java
public void sendmail(String from, String to, String subject, String body) {
    SimpleMailMessage message = new SimpleMailMessage();
    message.setFrom(from);
    message.setTo(to);
    message.setSubject(subject);
    message.setText(body);
    mailSender.send(message);
}
```

7. We need to configure the SMTP details. Spring mail support provides the flexibility of configuring the SMTP details in the XML file:

```xml
<bean id="mailSender"
      class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <property name="host" value="smtp.gmail.com" />
    <property name="port" value="587" />
    <property name="username" value="username" />
    <property name="password" value="password" />
    <property name="javaMailProperties">
        <props>
            <prop key="mail.smtp.auth">true</prop>
            <prop key="mail.smtp.starttls.enable">true</prop>
        </props>
    </property>
</bean>

<bean id="mailSenderService" class="com.packt.mailService.MailSenderService">
    <property name="mailSender" ref="mailSender" />
</bean>
```

8. We need to send mail to the customer after the order has been placed successfully in the MongoDB database. Update the addorder() method as follows:

```java
@RequestMapping(value = "/order/save", method = RequestMethod.POST)
// Insert the order record, then notify the customer by mail
public String addorder(@ModelAttribute("Order") Order order,
                       Map<String, Object> model) {
    Customer cust = new Customer();
    cust = customer_respository.getObject(order.getCustomer().getCust_id());
    order.setCustomer(cust);
    order.setProduct(product_respository.getObject(order.getProduct().getProdid()));
    respository.saveObject(order);
    // the sender address below is a placeholder
    mailSenderService.sendmail("shop@localhost", cust.getEmail(),
        "Dear " + cust.getName() + ", your order details",
        order.getProduct().getName() + " - price - "
            + order.getProduct().getPrice());
    model.put("customerList", customerList);
    model.put("productList", productList);
    return "order";
}
```

Sending mail to multiple recipients

If you want to notify users about the latest products or promotions in the application, you can create a mail sending group and send mail to multiple recipients using Spring mail support. We have created an overloaded method in the same class, MailSenderService, which accepts a string array of recipients.
The code snippet in the class will look like this:

```java
public class MailSenderService {

    @Autowired
    private MailSender mailSender;
    @Autowired
    private SimpleMailMessage simplemailmessage;

    public void sendmail(String from, String to, String subject, String body) {
        /* Code */
    }

    public void sendmail(String from, String[] to, String subject, String body) {
        /* Code */
    }
}
```

The following is the code snippet for listing the set of users from MongoDB who have subscribed to promotional e-mails:

```java
public List<Customer> getAllObjectsby_emailsubscription(String status) {
    return mongoTemplate.find(
        query(where("email_subscribe").is("yes")), Customer.class);
}
```

Sending MIME messages

Multipurpose Internet Mail Extensions (MIME) allows attachments to be sent over the Internet. This section demonstrates how we can send mail with MIME messages. Using a MIME message sender type class is not advisable if you are not sending any attachments with the mail; in the next section, we will look at how to send mail with attachments.

Update the MailSenderService class with another method. We have used the MIME message preparator and overridden the prepare() method to set the properties for the mail.
```java
public class MailSenderService {

    // JavaMailSender is required here, since MIME messages need the
    // JavaMail-aware sender rather than the plain MailSender interface.
    @Autowired
    private JavaMailSender mailSender;
    @Autowired
    private SimpleMailMessage simplemailmessage;

    public void sendmail(String from, String to, String subject, String body) {
        /* Code */
    }

    public void sendmail(String from, String[] to, String subject, String body) {
        /* Code */
    }

    public void sendmime_mail(final String from, final String to,
            final String subject, final String body) throws MailException {
        MimeMessagePreparator message = new MimeMessagePreparator() {
            public void prepare(MimeMessage mimeMessage) throws Exception {
                mimeMessage.setRecipient(Message.RecipientType.TO,
                    new InternetAddress(to));
                mimeMessage.setFrom(new InternetAddress(from));
                mimeMessage.setSubject(subject);
                mimeMessage.setText(body);
            }
        };
        mailSender.send(message);
    }
}
```

Sending attachments with mail

We can also attach various kinds of files to the mail. This functionality is supported by the MimeMessageHelper class. If you just want to send a MIME message without an attachment, you can opt for MimeMessagePreparator. If the requirement is to send an attachment with the mail, we can go for the MimeMessageHelper class with the file APIs. Spring provides a class named org.springframework.core.io.FileSystemResource, which has a parameterized constructor that accepts file objects.
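Independent of Spring, it can help to see the multipart/mixed structure that a helper like MimeMessageHelper assembles — a text part plus an attachment part. The sketch below uses Python's standard email library purely to make that structure visible; the addresses and file bytes are placeholders:

```python
# Illustrative only: the multipart/mixed MIME structure (text part plus an
# attachment), shown with Python's standard email library rather than
# Spring. Addresses and file contents are placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@localhost"
msg["To"] = "recipient@localhost"
msg["Subject"] = "Test mail with Attachment"
msg.set_content("Please find Attachment.")

# Adding an attachment converts the message to multipart/mixed.
msg.add_attachment(b"\xff\xd8 fake jpeg bytes", maintype="image",
                   subtype="jpeg", filename="GODGOD.jpg")

assert msg.get_content_type() == "multipart/mixed"
assert [p.get_content_type() for p in msg.iter_parts()] == \
    ["text/plain", "image/jpeg"]
```

Passing true to the MimeMessageHelper constructor in the Spring example that follows requests exactly this multipart layout.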
public class SendMailwithAttachment {

  public static void main(String[] args) throws MessagingException {
    AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
    ctx.register(AppConfig.class);
    ctx.refresh();
    JavaMailSenderImpl mailSender = ctx.getBean(JavaMailSenderImpl.class);
    MimeMessage mimeMessage = mailSender.createMimeMessage();
    // Pass the true flag for a multipart message
    MimeMessageHelper mailMsg = new MimeMessageHelper(mimeMessage, true);
    mailMsg.setFrom("[email protected]");
    mailMsg.setTo("[email protected]");
    mailMsg.setSubject("Test mail with Attachment");
    mailMsg.setText("Please find Attachment.");
    // FileSystemResource object for the attachment
    FileSystemResource file = new FileSystemResource(new File("D:/cp/GODGOD.jpg"));
    mailMsg.addAttachment("GODGOD.jpg", file);
    mailSender.send(mimeMessage);
    System.out.println("---Done---");
  }
}

Sending preconfigured mail

In this example, we shall provide a message that is to be sent in the mail, and we will configure it in an XML file. Sometimes, when it comes to web applications, you may have to send maintenance messages. Think of a scenario where the content of the mail changes, but the sender and receiver are preconfigured. In such a case, you can add another overloaded method to the MailSender class. We have fixed the subject of the mail, and the content can be sent by the user. Think of it as "an application which sends mails to users whenever the build fails".
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context
    http://www.springframework.org/schema/context/spring-context-3.0.xsd">

  <context:component-scan base-package="com.packt" />

  <!-- Set default mail properties -->
  <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <property name="host" value="smtp.gmail.com"/>
    <property name="port" value="25"/>
    <property name="username" value="[email protected]"/>
    <property name="password" value="password"/>
    <property name="javaMailProperties">
      <props>
        <prop key="mail.transport.protocol">smtp</prop>
        <prop key="mail.smtp.auth">true</prop>
        <prop key="mail.smtp.starttls.enable">true</prop>
        <prop key="mail.debug">true</prop>
      </props>
    </property>
  </bean>

  <!-- You can also have some pre-configured messages which are ready to send -->
  <bean id="preConfiguredMessage" class="org.springframework.mail.SimpleMailMessage">
    <property name="to" value="[email protected]"/>
    <property name="from" value="[email protected]"/>
    <property name="subject" value="FATAL ERROR- APPLICATION AUTO MAINTENANCE STARTED-BUILD FAILED!!"/>
  </bean>
</beans>

Now we shall send two different bodies for this subject.
public class MyMailer {

  public static void main(String[] args){
    ApplicationMailer mailer = null;
    try{
      // Create the application context
      ApplicationContext context = new FileSystemXmlApplicationContext("application-context.xml");
      // Get the mailer instance
      mailer = (ApplicationMailer) context.getBean("mailService");
      // Send a composed mail
      mailer.sendMail("[email protected]", "Test Subject", "Testing body");
    }catch(Exception e){
      // Send a pre-configured mail
      mailer.sendPreConfiguredMail("build failed exception occured check console or logs" + e.getMessage());
    }
  }
}

Using Spring templates with Velocity to send HTML mails

Velocity is the templating language provided by Apache. It can be integrated into the Spring view layer easily. The latest Velocity version used in this book is 1.7. In the previous section, we demonstrated using Velocity to send e-mails using the @Bean and @Configuration annotations. In this section, we shall see how we can configure Velocity to send mails using XML configuration. All that needs to be done is to add the following bean definition to the .xml file. In the case of mvc, you can add it to the dispatcher-servlet.xml file.

<bean id="velocityEngine" class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
  <property name="velocityProperties">
    <value>
      resource.loader=class
      class.resource.loader.class=org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
    </value>
  </property>
</bean>

Create a new Maven web project with the name Spring4MongoDB_Mail_VelocityChapter3. Create a package and name it com.packt.velocity.templates. Create a file with the name orderconfirmation.vm.

<html>
<body>
<h3>Dear Customer,</h3>
<p>${customer.firstName} ${customer.lastName}</p>
<p>We have dispatched your order to this address:</p>
${customer.address}
</body>
</html>

Use all the dependencies that we have added in the previous sections.
To the existing Maven project, add this dependency:

<dependency>
  <groupId>org.apache.velocity</groupId>
  <artifactId>velocity</artifactId>
  <version>1.7</version>
</dependency>

To ensure that Velocity gets loaded on application startup, we shall create a class. Let's name the class VelocityConfiguration.java. We have used the annotations @Configuration and @Bean with the class.

import java.io.IOException;
import java.util.Properties;

import org.apache.velocity.app.VelocityEngine;
import org.apache.velocity.exception.VelocityException;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.ui.velocity.VelocityEngineFactory;

@Configuration
public class VelocityConfiguration {

  @Bean
  public VelocityEngine getVelocityEngine() throws VelocityException, IOException {
    VelocityEngineFactory velocityEngineFactory = new VelocityEngineFactory();
    Properties props = new Properties();
    props.put("resource.loader", "class");
    props.put("class.resource.loader.class",
      "org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader");
    velocityEngineFactory.setVelocityProperties(props);
    return velocityEngineFactory.createVelocityEngine();
  }
}

Use the same MailSenderService class and add another overloaded sendMail() method in the class.
public void sendmail(final Customer customer){
  MimeMessagePreparator preparator = new MimeMessagePreparator() {
    public void prepare(MimeMessage mimeMessage) throws Exception {
      MimeMessageHelper message = new MimeMessageHelper(mimeMessage);
      message.setTo(customer.getEmailAddress());
      message.setFrom("[email protected]"); // could be parameterized
      Map<String, Object> model = new HashMap<String, Object>();
      model.put("customer", customer);
      String text = VelocityEngineUtils.mergeTemplateIntoString(velocityEngine,
        "com/packt/velocity/templates/orderconfirmation.vm", model);
      message.setText(text, true);
    }
  };
  this.mailSender.send(preparator);
}

Update the controller class to send mail using the Velocity template.

@RequestMapping(value = "/order/save", method = RequestMethod.POST)
// request to insert an order record
public String addorder(@ModelAttribute("Order") Order order, Map<String, Object> model) {
  Customer cust = new Customer();
  cust = customer_respository.getObject(order.getCustomer().getCust_id());
  order.setCustomer(cust);
  order.setProduct(product_respository.getObject(order.getProduct().getProdid()));
  respository.saveObject(order);
  // send mail using the Velocity template
  mailSenderService.sendmail(cust);
  return "order";
}

Sending Spring mail over a different thread

There are other options for sending Spring mail asynchronously. One way is to hand the mail sending job to a separate thread. Spring comes with the taskExecutor package, which offers us thread pooling functionality. Create a class called MailSenderAsyncService that implements the MailSender interface. Import the org.springframework.core.task.TaskExecutor package. Create a private class called SimpleMailMessageRunnable.
Here is the complete code for MailSenderAsyncService:

public class MailSenderAsyncService implements MailSender {

  @Resource(name = "mailSender")
  private MailSender mailSender;

  private TaskExecutor taskExecutor;

  @Autowired
  public MailSenderAsyncService(TaskExecutor taskExecutor){
    this.taskExecutor = taskExecutor;
  }

  public void send(SimpleMailMessage simpleMessage) throws MailException {
    taskExecutor.execute(new SimpleMailMessageRunnable(simpleMessage));
  }

  public void send(SimpleMailMessage[] simpleMessages) throws MailException {
    taskExecutor.execute(new SimpleMailMessagesRunnable(simpleMessages));
  }

  private class SimpleMailMessageRunnable implements Runnable {
    private SimpleMailMessage simpleMailMessage;

    private SimpleMailMessageRunnable(SimpleMailMessage simpleMailMessage) {
      this.simpleMailMessage = simpleMailMessage;
    }

    public void run() {
      mailSender.send(simpleMailMessage);
    }
  }

  private class SimpleMailMessagesRunnable implements Runnable {
    private SimpleMailMessage[] simpleMessages;

    private SimpleMailMessagesRunnable(SimpleMailMessage[] simpleMessages) {
      this.simpleMessages = simpleMessages;
    }

    public void run() {
      mailSender.send(simpleMessages);
    }
  }
}

Configure the ThreadPool executor in the .xml file.

<bean id="taskExecutor"
  class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor"
  p:corePoolSize="5" p:maxPoolSize="10" p:queueCapacity="100"
  p:waitForTasksToCompleteOnShutdown="true"/>

Test the source code.
import javax.annotation.Resource;

import org.springframework.mail.MailSender;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.test.context.ContextConfiguration;

@ContextConfiguration
public class MailSenderAsyncServiceTest {

  @Resource(name = "mailSender")
  private MailSender mailSender;

  public void testSendMails() throws Exception {
    SimpleMailMessage[] mailMessages = new SimpleMailMessage[5];
    for (int i = 0; i < mailMessages.length; i++) {
      SimpleMailMessage message = new SimpleMailMessage();
      message.setSubject(String.valueOf(i));
      mailMessages[i] = message;
    }
    mailSender.send(mailMessages);
  }

  public static void main(String args[]) throws Exception {
    MailSenderAsyncServiceTest asyncServiceTest = new MailSenderAsyncServiceTest();
    asyncServiceTest.testSendMails();
  }
}

Sending Spring mail with AOP

We can also send mails by integrating the mailing functionality with Aspect Oriented Programming (AOP). This can be used to send mails after the user registers with an application. Think of a scenario where the user receives an activation mail after registration. This can also be used to send information about an order placed on an application. Use the following steps to create a MailAdvice class using AOP:

Create a package called com.packt.aop. Create a class called MailAdvice.

public class MailAdvice {

  public void advice(final ProceedingJoinPoint proceedingJoinPoint) {
    new Thread(new Runnable() {
      public void run() {
        System.out.println("proceedingJoinPoint:" + proceedingJoinPoint);
        try {
          proceedingJoinPoint.proceed();
        } catch (Throwable t) {
          // All we can do is log the error.
          System.out.println(t);
        }
      }
    }).start();
  }
}

This class creates a new thread and starts it. In the run method, the proceedingJoinPoint.proceed() method is called. ProceedingJoinPoint is a class available in AspectJ.jar. Update the dispatcher-servlet.xml file with aop configurations.
Update the xmlns namespace declarations to include the aop schema, and add the following configuration (the around advice intercepts calls to JavaMailSenderImpl.send() and routes them through our MailAdvice class):

<aop:config>
  <aop:aspect ref="mailAdvice">
    <aop:around method="advice"
      pointcut="execution(* org.springframework.mail.javamail.JavaMailSenderImpl.send(..))"/>
  </aop:aspect>
</aop:config>

Summary

In this article, we demonstrated how to create a mailing service and configure it using the Spring API. We also demonstrated how to send mails with attachments using MIME messages, and how to send mails on a dedicated thread using Spring's TaskExecutor. We saw an example in which mail can be sent to multiple recipients, and an implementation that uses the Velocity engine to create templates and send mails to recipients. In the last section, we demonstrated how mails can be sent using Spring AOP and threads.

Resources for Article:

Further resources on this subject:

Time Travelling with Spring [article]
Welcome to the Spring Framework [article]
Creating a Spring Application [article]


How to Scaffold a New module in Odoo 11

Sugandha Lahoti
25 May 2018
2 min read
The latest version of Odoo ERP, Odoo 11, brings a plethora of features to Odoo targeting business application development. The market for Odoo is growing enormously and if you have thought about developing in Odoo, now is the best time to start. This hands-on video course, Odoo 11 Development Essentials, by Riste Kabranov, will help you get started with Odoo to build powerful applications.

What is Scaffolding?

With scaffolding, you can automatically create a skeleton structure to simplify bootstrapping of new modules in Odoo. Since it's an automatic process, you don't need to spend effort setting up basic structures or looking up the starting requirements. Odoo has a scaffold command that creates the skeleton for a new module based on a template. By default, the new module is created in the current working directory, but we can provide a specific directory in which to create the module, passing it as an additional parameter.

A step-by-step guide to scaffold a new module in Odoo 11:

Step 1

In the first step, you need to navigate to /opt/odoo/odoo and create a folder named custom_addons.

Step 2

In the second step, you scaffold a new module into the custom_addons folder. For this:

Locate odoo-bin
Use ./odoo-bin scaffold module_name folder_name to scaffold a new empty module
Check if the new module is there and contains all the files needed

Check out the video for a more detailed walkthrough! This video tutorial has been taken from Odoo 11 Development Essentials. To learn how to build and customize business applications with Odoo, buy the full video course. ERP tool in focus: Odoo 11 Building Your First Odoo Application Top 5 free Business Intelligence tools
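Among the files the scaffold command generates is a __manifest__.py describing the module. The snippet below is an illustrative sketch of the kind of keys such a manifest holds; the module name and all values are placeholders of our own, not actual scaffold output, and the dictionary is bound to a name here only so its structure is easy to inspect (in the real file it is a bare dictionary literal):

```python
# Illustrative sketch of a scaffolded module's __manifest__.py contents.
# All values below are placeholders, not output of the scaffold command.
manifest = {
    'name': 'My Module',                          # human-readable module name
    'version': '1.0',
    'summary': 'Short description of the module',
    'depends': ['base'],                          # other Odoo modules this one needs
    'data': [],                                   # XML/CSV data files to load
    'installable': True,
}
```

Editing this file is typically the first step after scaffolding, since the module will not install until its name and dependencies are filled in correctly.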


Intelligent mobile projects with TensorFlow: Build a basic Raspberry Pi robot that listens, moves, sees, and speaks [Tutorial]

Bhagyashree R
27 Aug 2018
14 min read
According to Wikipedia, "The Raspberry Pi is a series of small single-board computers developed in the United Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in schools and in developing countries." The official site of Raspberry Pi describes it as "a small and affordable computer that you can use to learn programming." If you have never heard of or used Raspberry Pi before, just go to its website and chances are you'll quickly fall in love with the cool little thing. Little yet powerful: in fact, developers of TensorFlow made TensorFlow available on Raspberry Pi from early versions around mid-2016, so we can run complicated TensorFlow models on the tiny computer that you can buy for about $35. In this article we will see how to set up TensorFlow on Raspberry Pi and use the TensorFlow image recognition and audio recognition models, along with text to speech and robot movement APIs, to build a Raspberry Pi robot that can move, see, listen, and speak. This tutorial is an excerpt from a book written by Jeff Tang titled Intelligent Mobile Projects with TensorFlow.

Setting up TensorFlow on Raspberry Pi

To use TensorFlow in Python, we can install the TensorFlow 1.6 nightly build for Pi from the TensorFlow Jenkins continuous integration site (http://ci.tensorflow.org/view/Nightly/job/nightly-pi/223/artifact/output-artifacts):

sudo pip install http://ci.tensorflow.org/view/Nightly/job/nightly-pi/lastSuccessfulBuild/artifact/output-artifacts/tensorflow-1.6.0-cp27-none-any.whl

This method is quite common. A more complicated method is to use the makefile, required when you need to build and use the TensorFlow library. The Raspberry Pi section of the official TensorFlow makefile documentation (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/makefile) has detailed steps to build the TensorFlow library, but it may not work with every release of TensorFlow.
The steps there work perfectly with an earlier version of TensorFlow (0.10), but would cause many "undefined reference to google::protobuf" errors with TensorFlow 1.6. The following steps have been tested with the TensorFlow 1.6 release, downloadable at https://github.com/tensorflow/tensorflow/releases/tag/v1.6.0; you can certainly try a newer version in the TensorFlow releases page, or clone the latest TensorFlow source by git clone https://github.com/tensorflow/tensorflow, and fix any possible hiccups. After cd to your TensorFlow source root, we run the following commands:

tensorflow/contrib/makefile/download_dependencies.sh
sudo apt-get install -y autoconf automake libtool gcc-4.8 g++-4.8
cd tensorflow/contrib/makefile/downloads/protobuf/
./autogen.sh
./configure
make CXX=g++-4.8
sudo make install
sudo ldconfig # refresh shared library cache
cd ../../../../..
export HOST_NSYNC_LIB=`tensorflow/contrib/makefile/compile_nsync.sh`
export TARGET_NSYNC_LIB="$HOST_NSYNC_LIB"

Make sure you run make CXX=g++-4.8, instead of just make, as documented in the official TensorFlow Makefile documentation, because Protobuf must be compiled with the same gcc version as the one used for building the TensorFlow library, in order to fix those "undefined reference to google::protobuf" errors. Now try to build the TensorFlow library using the following command:

make -f tensorflow/contrib/makefile/Makefile HOST_OS=PI TARGET=PI \
OPTFLAGS="-Os -mfpu=neon-vfpv4 -funsafe-math-optimizations -ftree-vectorize" CXX=g++-4.8

After a few hours of building, you'll likely get an error such as "virtual memory exhausted: Cannot allocate memory" or the Pi board will just freeze due to running out of memory. To fix this, we need to set up a swap, because without the swap, when an application runs out of memory, the application will get killed due to a kernel panic. There are two ways to set up a swap: swap file and swap partition.
Raspbian uses a default swap file of 100 MB on the SD card, as shown here using the free command:

pi@raspberrypi:~/tensorflow-1.6.0 $ free -h
      total  used  free  shared  buff/cache  available
Mem:   927M   45M  843M    660K         38M       838M
Swap:   99M   74M   25M

To increase the swap file size to 1 GB, modify the /etc/dphys-swapfile file via sudo vi /etc/dphys-swapfile, changing CONF_SWAPSIZE=100 to CONF_SWAPSIZE=1024, then restart the swap file service:

sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start

After this, free -h will show the Swap total to be 1.0 GB.

A swap partition is created on a separate USB disk and is preferred, because a swap partition can't get fragmented but a swap file on the SD card can get fragmented easily, causing slower access. To set up a swap partition, plug a USB stick with no data you need on it into the Pi board, then run sudo blkid, and you'll see something like this:

/dev/sda1: LABEL="EFI" UUID="67E3-17ED" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="622fddad-da3c-4a09-b6b3-11233a2ca1f6"
/dev/sda2: UUID="E67F-6EAB" TYPE="vfat" PARTLABEL="NO NAME" PARTUUID="a045107a-9e7f-47c7-9a4b-7400d8d40f8c"

/dev/sda2 is the partition we'll use as the swap partition. Now unmount and format it to be a swap partition:

sudo umount /dev/sda2
sudo mkswap /dev/sda2

mkswap: /dev/sda2: warning: wiping old swap signature.
Setting up swapspace version 1, size = 29.5 GiB (31671701504 bytes)
no label, UUID=23443cde-9483-4ed7-b151-0e6899eba9de

You'll see a UUID output in the mkswap command; run sudo vi /etc/fstab, and add a line as follows to the fstab file with the UUID value:

UUID=<UUID value> none swap sw,pri=5 0 0

Save and exit the fstab file and then run sudo swapon -a. Now if you run free -h again, you'll see the Swap total to be close to the USB storage size.
We definitely don't need all that size for swap; in fact, the recommended maximum swap size for the Raspberry Pi 3 board with 1 GB memory is 2 GB, but we'll leave it as is because we just want to successfully build the TensorFlow library. With either of the swap setting changes, we can rerun the make command:

make -f tensorflow/contrib/makefile/Makefile HOST_OS=PI TARGET=PI \
OPTFLAGS="-Os -mfpu=neon-vfpv4 -funsafe-math-optimizations -ftree-vectorize" CXX=g++-4.8

After this completes, the TensorFlow library will be generated as tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a. Now we can build the image classification example using the library.

Image recognition and text to speech

There are two TensorFlow Raspberry Pi example apps (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/pi_examples) located in tensorflow/contrib/pi_examples: label_image and camera. We'll modify the camera example app to integrate text to speech so the app can speak out its recognized images when moving around. Before we build and test the two apps, we need to install some libraries and download the pre-built TensorFlow Inception model file:

sudo apt-get install -y libjpeg-dev
sudo apt-get install libv4l-dev
curl https://storage.googleapis.com/download.tensorflow.org/models/inception_dec_2015_stripped.zip -o /tmp/inception_dec_2015_stripped.zip
cd ~/tensorflow-1.6.0
unzip /tmp/inception_dec_2015_stripped.zip -d tensorflow/contrib/pi_examples/label_image/data/

To build the label_image and camera apps, run:

make -f tensorflow/contrib/pi_examples/label_image/Makefile
make -f tensorflow/contrib/pi_examples/camera/Makefile

You may encounter the following error when building the apps:

./tensorflow/core/platform/default/mutex.h:25:22: fatal error: nsync_cv.h: No such file or directory
#include "nsync_cv.h"
compilation terminated.

To fix this, run sudo cp tensorflow/contrib/makefile/downloads/nsync/public/nsync*.h /usr/include.
Then edit the tensorflow/contrib/pi_examples/label_image/Makefile or tensorflow/contrib/pi_examples/camera/Makefile file, and add the following library and include paths before running the make command again:

-L$(DOWNLOADSDIR)/nsync/builds/default.linux.c++11 \
-lnsync \

To test run the two apps, run the apps directly:

tensorflow/contrib/pi_examples/label_image/gen/bin/label_image
tensorflow/contrib/pi_examples/camera/gen/bin/camera

Take a look at the C++ source code, tensorflow/contrib/pi_examples/label_image/label_image.cc and tensorflow/contrib/pi_examples/camera/camera.cc, and you'll see they use similar C++ code as in our iOS apps in the previous chapters to load the model graph file, prepare the input tensor, run the model, and get the output tensor.

By default, the camera example also uses the prebuilt Inception model unzipped in the label_image/data folder. But for your own specific image classification task, you can provide your own model retrained via transfer learning using the --graph parameter when running the two example apps.

In general, voice is a Raspberry Pi robot's main UI to interact with us. Ideally, we should run a TensorFlow-powered natural-sounding Text-to-Speech (TTS) model such as WaveNet (https://deepmind.com/blog/wavenet-generative-model-raw-audio) or Tacotron (https://github.com/keithito/tacotron), but it'd be beyond the scope of this article to run and deploy such a model. It turns out that we can use a much simpler TTS library called Flite by CMU (http://www.festvox.org/flite), which offers pretty decent TTS, and it takes just one simple command to install it: sudo apt-get install flite. If you want to install the latest version of Flite to hopefully get a better TTS quality, just download the latest Flite source from the link and build it. To test Flite with our USB speaker, run flite with the -t parameter followed by a double-quoted text string, such as flite -t "i recommend the ATM machine".
If you don't like the default voice, you can find other supported voices by running flite -lv, which should return Voices available: kal awb_time kal16 awb rms slt. Then you can specify a voice to use for TTS: flite -voice rms -t "i recommend the ATM machine". To let the camera app speak out the recognized objects, which should be the desired behavior when the Raspberry Pi robot moves around, you can use this simple pipe command:

tensorflow/contrib/pi_examples/camera/gen/bin/camera | xargs -n 1 flite -t

You'll likely hear too much speech. To fine-tune the TTS result of image classification, you can also modify the camera.cc file and add the following code to the PrintTopLabels function before rebuilding the example using make -f tensorflow/contrib/pi_examples/camera/Makefile:

std::string cmd = "flite -voice rms -t \"";
cmd.append(labels[label_index]);
cmd.append("\"");
system(cmd.c_str());

Now that we have completed the image classification and speech synthesis tasks, without using any Cloud APIs, let's see how we can do audio recognition on Raspberry Pi.

Audio recognition and robot movement

To use the pre-trained audio recognition model in the TensorFlow tutorial (https://www.tensorflow.org/tutorials/audio_recognition), we'll reuse a listen.py Python script from https://gist.github.com/aallan, and add the GoPiGo API calls to control the robot movement after it recognizes four basic audio commands: "left," "right," "go," and "stop." The other six commands supported by the pre-trained model ("yes," "no," "up," "down," "on," and "off") don't apply well in our example.
To run the script, first download the pre-trained audio recognition model from http://download.tensorflow.org/models/speech_commands_v0.01.zip and unzip it, for example, to the Pi board's /tmp directory, then run:

python listen.py --graph /tmp/conv_actions_frozen.pb --labels /tmp/conv_actions_labels.txt -I plughw:1,0

Or you can run:

python listen.py --graph /tmp/speech_commands_graph.pb --labels /tmp/conv_actions_labels.txt -I plughw:1,0

Note that the plughw value 1,0 should match the card number and device number of your USB microphone, which can be found using the arecord -l command we showed before. The listen.py script also supports many other parameters. For example, we can use --detection_threshold 0.5 instead of the default detection threshold of 0.8. Let's now take a quick look at how listen.py works before we add the GoPiGo API calls to make the robot move. listen.py uses Python's subprocess module and its Popen class to spawn a new process that runs the arecord command with appropriate parameters. The Popen class has an stdout attribute that specifies the executed arecord command's standard output file handle, which can be used to read the recorded audio bytes.
The Python code to load the trained model graph is as follows:

with tf.gfile.FastGFile(filename, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

A TensorFlow session is created using tf.Session(), and after the graph is loaded and the session created, the recorded audio buffer gets sent, along with the sample rate, as the input data to the TensorFlow session's run method, which returns the prediction of the recognition:

run(softmax_tensor, {
    self.input_samples_name_: input_data,
    self.input_rate_name_: self.sample_rate_
})

Here, softmax_tensor is defined as the TensorFlow graph's get_tensor_by_name(self.output_name_), and output_name_, input_samples_name_, and input_rate_name_ are defined as labels_softmax, decoded_sample_data:0, and decoded_sample_data:1, respectively.

On Raspberry Pi, you can choose to run the TensorFlow models on Pi using the TensorFlow Python API directly, or the C++ API (as in the label_image and camera examples), although normally you'd still train the models on a more powerful computer. For the complete TensorFlow Python API documentation, see https://www.tensorflow.org/api_docs/python.
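Separately from the TensorFlow calls, the gating logic the script applies to predictions (accept a label only when its score beats the detection threshold and the suppression window since the last accepted command has elapsed) can be sketched in plain Python. The function name, the state dictionary, and the default values below are our own illustration, not code taken from listen.py:

```python
def accept_command(top_label, top_score, now_ms, state,
                   detection_threshold=0.5, suppression_ms=1500):
    """Return top_label if it clears the score threshold and enough time
    has passed since the last accepted command; otherwise return None.
    `state` is a dict carrying the last accepted label and its timestamp."""
    time_since_last = now_ms - state.get('last_time_ms', -suppression_ms)
    if top_score > detection_threshold and time_since_last > suppression_ms:
        state['last_label'] = top_label
        state['last_time_ms'] = now_ms
        return top_label
    return None

state = {}
print(accept_command("go", 0.9, 1000, state))   # go  (accepted)
print(accept_command("go", 0.9, 1200, state))   # None (suppressed: too soon)
```

Raising --detection_threshold trades missed commands for fewer false triggers, while the suppression window keeps one spoken word from firing the same command several times in a row.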
To use the GoPiGo Python API to make the robot move based on your voice command, first add the following two lines to listen.py:

import easygopigo3 as gpg
gpg3_obj = gpg.EasyGoPiGo3()

Then add the following code to the end of the def add_data method:

if current_top_score > self.detection_threshold_ and time_since_last_top > self.suppression_ms_:
    self.previous_top_label_ = current_top_label
    self.previous_top_label_time_ = current_time_ms
    is_new_command = True
    logger.info(current_top_label)
    if current_top_label == "go":
        gpg3_obj.drive_cm(10, False)
    elif current_top_label == "left":
        gpg3_obj.turn_degrees(-30, False)
    elif current_top_label == "right":
        gpg3_obj.turn_degrees(30, False)
    elif current_top_label == "stop":
        gpg3_obj.stop()

Now put your Raspberry Pi robot on the ground, connect to it with ssh from your computer, and run the following script:

python listen.py --graph /tmp/conv_actions_frozen.pb --labels /tmp/conv_actions_labels.txt -I plughw:1,0 --detection_threshold 0.5

You'll see output like this:

INFO:audio:started recording
INFO:audio:_silence_
INFO:audio:_silence_

Then you can say left, right, stop, go, and stop to see the commands get recognized and the robot move accordingly:

INFO:audio:left
INFO:audio:_silence_
INFO:audio:_silence_
INFO:audio:right
INFO:audio:_silence_
INFO:audio:stop
INFO:audio:_silence_
INFO:audio:go
INFO:audio:stop

You can run the camera app in a separate Terminal, so while the robot moves around based on your voice commands, it'll recognize new images it sees and speak out the results. That's all it takes to build a basic Raspberry Pi robot that listens, moves, sees, and speaks: what the Google I/O 2016 demo does, but without using any Cloud APIs. It's far from a fancy robot that can understand natural human speech, engage in interesting conversations, or perform useful and non-trivial tasks.
But powered with pre-trained, retrained, or other powerful TensorFlow models, and using all kinds of sensors, you can certainly add more and more intelligence and physical power to the Pi robot we have built. Google TensorFlow is used to train all the models deployed and running on mobile devices. This book covers 10 projects on the implementation of all major AI areas on iOS, Android, and Raspberry Pi: computer vision, speech and language processing, and machine learning, including traditional, reinforcement, and deep reinforcement. If you liked this tutorial and would like to implement projects for major AI areas on iOS, Android, and Raspberry Pi, check out the book Intelligent Mobile Projects with TensorFlow. TensorFlow 2.0 is coming. Here’s what we can expect. Build and train an RNN chatbot using TensorFlow [Tutorial] Use TensorFlow and NLP to detect duplicate Quora questions [Tutorial]


Python 3: Building a Wiki Application

Packt
19 May 2011
17 min read
Python 3 Web Development Beginner's Guide

Nowadays, a wiki is a well-known tool to enable people to maintain a body of knowledge in a cooperative way. Wikipedia (http://wikipedia.org) might be the most famous example of a wiki today, but countless numbers of forums use some sort of wiki, and many tools and libraries exist to implement a wiki application. In this article, we will develop a wiki of our own, and in doing so, we will focus on two important concepts in building web applications. The first one is the design of the data layer. The second one is input validation. A wiki is normally a very public application that might not even employ a basic authentication scheme to identify users. This makes contributing to a wiki very simple, yet it also makes a wiki vulnerable in the sense that anyone can put anything on a wiki page. It's therefore a good idea to verify the content of any submitted change. You may, for example, strip out any HTML markup or disallow external links. Enhancing user interactions in a meaningful way is often closely related to input validation. Client-side input validation helps prevent the user from entering unwanted input and is therefore a valuable addition to any application, but it is not a substitute for server-side input validation, as we cannot trust the outside world not to try and access our server in unintended ways.

The data layer

A wiki consists of quite a number of distinct entities we can identify. We will implement these entities and the relations that exist between them by reusing the Entity/Relation framework developed earlier.

Time for action – designing the wiki data model

As with any application, when we start developing our wiki application we must first take a few steps to create a data model that can act as a starting point for the development:

Identify each entity that plays a role in the application. This might depend on the requirements.
For example, because we want the user to be able to change the title of a topic and we want to archive revisions of the content, we define separate Topic and Page entities.

Identify direct relations between entities. Our decision to define separate Topic and Page entities implies a relation between them, but there are more relations that can be identified, for example, between Topic and Tag.

Do not specify indirect relations: all topics marked with the same tag are in a sense related, but in general, it is not necessary to record these indirect relations as they can easily be inferred from the recorded relation between topics and tags.

The image shows the different entities and relations we can identify in our wiki application. In the diagram, we have illustrated the fact that a Topic may have more than one Page while a Page refers to a single User in a rather informal way, by representing Page as a stack of rectangles and User as a single rectangle. In this manner, we can grasp the most relevant aspects of the relations at a glance. When we want to show more relations, or relations with different characteristics, it might be a good idea to use more formal methods and tools. A good starting point is the Wikipedia entry on UML: http://en.wikipedia.org/wiki/Unified_Modelling_Language.

What just happened?

With the entities and relations in our data model identified, we can have a look at their specific qualities. The basic entity in a wiki is a Topic. A topic, in this context, is basically a title that describes what this topic is about. A topic has any number of associated Pages. Each instance of a Page represents a revision; the most recent revision is the current version of a topic. Each time a topic is edited, a new revision is stored in the database. This way, we can simply revert to an earlier version if we make a mistake, or compare the contents of two revisions. To simplify identifying revisions, each revision has a modification date.
We also maintain a relation between the Page and the User that modified that Page. In the wiki application that we will develop, it is also possible to associate any number of tags with a topic. A Tag entity consists simply of a tag attribute. The important part is the relation that exists between the Topic entity and the Tag entity. Like a Tag, a Word entity consists of a single attribute. Again, the important bit is the relation, this time, between a Topic and any number of Words. We will maintain this relation to reflect the words used in the current versions (that is, the last revision of a Page) of a Topic. This will allow for fairly responsive full text search facilities. The final entity we encounter is the Image entity. We will use this to store images alongside the pages with text. We do not define any relation between topics and images. Images might be referred to in the text of the topic, but besides this textual reference, we do not maintain a formal relation. If we would like to maintain such a relation, we would be forced to scan for image references each time a new revision of a page was stored, and probably we would need to signal something if a reference attempt was made to a non-existing image. 
In this case, we choose to ignore this: references to images that do not exist in the database will simply show nothing:

Chapter6/wikidb.py

    from entity import Entity
    from relation import Relation

    class User(Entity): pass
    class Topic(Entity): pass
    class Page(Entity): pass
    class Tag(Entity): pass
    class Word(Entity): pass
    class Image(Entity): pass

    class UserPage(Relation): pass
    class TopicPage(Relation): pass
    class TopicTag(Relation): pass
    class ImagePage(Relation): pass
    class TopicWord(Relation): pass

    def threadinit(db):
        User.threadinit(db)
        Topic.threadinit(db)
        Page.threadinit(db)
        Tag.threadinit(db)
        Word.threadinit(db)
        Image.threadinit(db)
        UserPage.threadinit(db)
        TopicPage.threadinit(db)
        TopicTag.threadinit(db)
        ImagePage.threadinit(db)
        TopicWord.threadinit(db)

    def inittable():
        User.inittable(userid="unique not null")
        Topic.inittable(title="unique not null")
        Page.inittable(content="",
                       modified="not null default CURRENT_TIMESTAMP")
        Tag.inittable(tag="unique not null")
        Word.inittable(word="unique not null")
        Image.inittable(type="", data="blob", title="",
                        modified="not null default CURRENT_TIMESTAMP",
                        description="")
        UserPage.inittable(User, Page)
        TopicPage.inittable(Topic, Page)
        TopicTag.inittable(Topic, Tag)
        TopicWord.inittable(Topic, Word)

Because we can reuse the entity and relation modules we developed earlier, the actual implementation of the database layer is straightforward (the full code is available as wikidb.py). After importing both modules, we first define a subclass of Entity for each entity we identified in our data model. All these classes are used as is, so they have only a pass statement as their body. Likewise, we define a subclass of Relation for each relation we need to implement in our wiki application. All these Entity and Relation subclasses still need the initialization code to be called once each time the application starts, and that is where the convenience function initdb() comes in.
It bundles the initialization code for each entity and relation (highlighted). Many entities we define here are simple, but a few warrant closer inspection.

The Page entity contains a modified column that has a non null constraint. It also has a default: CURRENT_TIMESTAMP (highlighted). This default is SQLite specific (other database engines will have other ways of specifying such a default) and will initialize the modified column to the current date and time if we create a new Page record without explicitly setting a value.

The Image entity also has a definition that is a little bit different: its data column is explicitly defined to have a blob affinity. This will enable us to store binary data without any problem in this table, something we need in order to store and retrieve the binary data contained in an image. Of course, SQLite will happily store anything we pass it in this column, but if we pass it an array of bytes (not a string, that is), that array is stored as is.

The delivery layer

With the foundation, that is, the data layer, in place, we build on it when we develop the delivery layer. Between the delivery layer and the database layer, there is an additional layer that encapsulates the domain-specific knowledge (that is, it knows how to verify that the title of a new Topic entity conforms to the requirements we set for it before it stores it in the database).

Each different layer in our application is implemented in its own file or files. It is easy to get confused, so before we delve further into these files, have a look at the following table. It lists the different files that together make up the wiki application and refers to the names of the layers. We'll focus on the main CherryPy application first to get a feel for the behavior of the application.

Time for action – implementing the opening screen

The opening screen of the wiki application shows a list of all defined topics on the right and several ways to locate topics on the left.
Note that it still looks quite rough because, at this point, we haven't applied any style sheets.

Let us first take a few steps to identify the underlying structure. This structure is what we would like to represent in the HTML markup:

Identify related pieces of information that are grouped together. These form the backbone of a structured web page. In this case, the search features on the left form a group of elements distinct from the list of topics on the right.

Identify distinct pieces of functionality within these larger groups. For example, the elements (input field and search button) that together make up the word search are such a piece of functionality, as are the tag search and the tag cloud.

Try to identify any hidden functionality, that is, necessary pieces of information that will have to be part of the HTML markup but are not directly visible on a page. In our case, we have links to the jQuery and jQuery UI JavaScript libraries and links to CSS style sheets.

Identifying these distinct pieces will not only help to put together HTML markup that reflects the structure of a page, but also help to identify necessary functionality in the delivery layer, because each of these functional pieces is concerned with specific information processed and produced by the server.

What just happened?

Let us look in somewhat more detail at the structure of the opening page that we identified. Most notable are three search input fields to locate topics based on words occurring in their bodies, based on their actual title, or based on tags associated with a topic. These search fields feature autocomplete functionality that allows for comma-separated lists. In the same column, there is also room for a tag cloud, an alphabetical list of tags with font sizes dependent on the number of topics marked with that tag.

The structural components

The HTML markup for this opening page is shown next.
It is available as the file basepage.html, and the contents of this file are served by several methods in the Wiki class implementing the delivery layer, each with a suitable content segment. Also, some of the content will be filled in by AJAX calls, as we will see in a moment:

Chapter6/basepage.html

    <html>
        <head>
            <title>Wiki</title>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript">
            </script>
            <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.3/jquery-ui.min.js" type="text/javascript">
            </script>
            <link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.3/themes/smoothness/jquery-ui.css" type="text/css" media="all" />
            <link rel="stylesheet" href="/wiki.css" type="text/css" media="all" />
        </head>
        <body>
            <div id="navigation">
                <div class="navitem">
                    <a href="./">Wiki Home</a>
                </div>
                <div class="navitem">
                    <span class="label">Search topic</span>
                    <form id="topicsearch">
                        <input type="text">
                        <button type="submit">Search</button>
                    </form>
                </div>
                <div class="navitem">
                    <span class="label">Search word</span>
                    <form id="wordsearch">
                        <input type="text">
                        <button type="submit">Search</button>
                    </form>
                </div>
                <div class="navitem">
                    <span class="label">Search tag</span>
                    <form id="tagsearch">
                        <input type="text">
                        <button type="submit">Search</button>
                    </form>
                </div>
                <div class="navitem">
                    <p id="tagcloud">Tag cloud</p>
                </div>
            </div>
            <div id="content">%s</div>
            <script src="/wikiweb.js" type="text/javascript"></script>
        </body>
    </html>

The <head> element contains both links to CSS style sheets and <script> elements that refer to the jQuery libraries. This time, we choose again to retrieve these libraries from a public content delivery network. The highlighted lines show the top-level <div> elements that define the structure of the page. In this case, we have identified a navigation part and a content part, and this is reflected in the HTML markup.
Enclosed in the navigation part are the search functions, each in their own <div> element. The content part contains just an interpolation placeholder %s for now, which will be filled in by the method that serves this markup. Just before the end of the body of the markup is a final <script> element that refers to a JavaScript file that will perform actions specific to our application; we will examine those later.

The application methods

The markup from the previous section is served by methods of the Wiki class, an instance of which can be mounted as a CherryPy application. The index() method, for example, is where we produce the markup for the opening screen (the complete file is available as wikiweb.py and contains several other methods that we will examine in the following sections):

Chapter6/wikiweb.py

    @cherrypy.expose
    def index(self):
        item = '<li><a href="show?topic=%s">%s</a></li>'
        topiclist = "\n".join(
            [item % (t, t) for t in wiki.gettopiclist()])
        content = '<div id="wikihome"><ul>%s</ul></div>' % (
            topiclist,)
        return basepage % content

First, we define the markup for every topic we will display in the main area of the opening page (highlighted). The markup consists of a list item that contains an anchor element that refers to a URL relative to the page showing the opening screen. Using relative URLs allows us to mount the class that implements this part of the application anywhere in the tree that serves the CherryPy application. The show() method that will serve this URL takes a topic parameter whose value is interpolated in the next line for each topic that is present in the database. The result is joined to a single string that is interpolated into yet another string that encapsulates all the list items we just generated in an unordered list (a <ul> element in the markup), and this is finally returned as the interpolated content of the basepage variable.
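The string-interpolation step inside index() can be tried in isolation. The following standalone sketch replaces wiki.gettopiclist() with a hard-coded list of titles (a hypothetical stand-in for the real database query), but builds the markup the same way the method above does:

```python
# Hypothetical stand-in for wiki.gettopiclist(); the real function queries the database.
def gettopiclist():
    return ["Python", "Wiki"]

item = '<li><a href="show?topic=%s">%s</a></li>'

def topics_to_markup(topics):
    """Join one list item per topic into the wikihome <div>."""
    topiclist = "\n".join([item % (t, t) for t in topics])
    return '<div id="wikihome"><ul>%s</ul></div>' % (topiclist,)

print(topics_to_markup(gettopiclist()))
```

Each title is interpolated twice into item, once for the show URL and once for the visible link text, which is why the anchor and its label always match.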
In the definition of the index() method, we see a pattern that will be repeated often in the wiki application: methods in the delivery layer, like index(), concern themselves with constructing and serving markup to the client, and delegate the actual retrieval of information to a module that knows all about the wiki itself. Here, the list of topics is produced by the wiki.gettopiclist() function, while index() converts this information to markup. Separating these activities helps to keep the code readable and therefore maintainable.

Time for action – implementing a wiki topic screen

When we request a URL of the form show?topic=value, this will result in calling the show() method. If value equals an existing topic, the following (as yet unstyled) screen is the result:

Just as for the opening screen, we take steps to:

Identify the main areas on screen

Identify specific functionality

Identify any hidden functionality

The page structure is very similar to that of the opening screen, with the same navigational items, but instead of a list of topics, we see the content of the requested topic together with some additional information, like the tags associated with this subject and a button that may be clicked to edit the contents of this topic. After all, collaboratively editing content is what a wiki is all about. We deliberately chose not to refresh the contents of just a part of the opening screen with an AJAX call, but opted instead for a simple link that replaces the whole page. This way, there will be an unambiguous URL in the address bar of the browser that points at the topic. This allows for easy bookmarking. An AJAX call would have left the URL of the opening screen visible in the address bar of the browser unaltered, and although there are ways to alleviate this problem, we settle for this simple solution here.

What just happened?
As the main structure we identified is almost identical to the one for the opening page, the show() method reuses the markup in basepage.html.

Chapter6/wikiweb.py

    @cherrypy.expose
    def show(self, topic):
        topic = topic.capitalize()
        currentcontent, tags = wiki.gettopic(topic)
        currentcontent = "".join(wiki.render(currentcontent))
        tags = ['<li><a href="searchtags?tags=%s">%s</a></li>' % (
            t, t) for t in tags]
        content = '''
        <div>
            <h1>%s</h1><a href="edit?topic=%s">Edit</a>
        </div>
        <div id="wikitopic">%s</div>
        <div id="wikitags"><ul>%s</ul></div>
        <div id="revisions">revisions</div>
        ''' % (topic, topic, currentcontent, "\n".join(tags))
        return basepage % content

The show() method delegates most of the work to the wiki.gettopic() method (highlighted), which we will examine in the next section, and concentrates on creating the markup it will deliver to the client. wiki.gettopic() returns a tuple that consists of both the current content of the topic and a list of tags. Those tags are converted to <li> elements with anchors that point to the searchtags URL. This list of tags provides a simple way for the reader to find related topics with a single click. The searchtags URL takes a tags argument, so a single <li> element constructed this way may look like this: <li><a href="searchtags?tags=Python">Python</a></li>. The content and the clickable list of tags are embedded in the markup of the basepage, together with an anchor that points to the edit URL. Later, we will style this anchor to look like a button, and when the user clicks it, it will present a page where the content may be edited.
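The edit page is also where the server-side input validation discussed at the beginning of this article comes into play. The following is a purely hypothetical sketch (not part of the book's wikiweb.py) of how submitted content could be stripped of HTML markup before being stored as a new revision, using only the standard library:

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Keeps only the text content of submitted markup, discarding all tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def sanitize(submitted):
    """Return submitted wiki content with any HTML tags removed."""
    stripper = TagStripper()
    stripper.feed(submitted)
    stripper.close()
    return "".join(stripper.parts)

print(sanitize('A <b>bold</b> edit'))  # -> A bold edit
```

A real wiki would likely also limit content length and check links; the point is that the check happens on the server, where a client cannot bypass it.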

Blender 2.5: creating a UV texture

Packt
21 Oct 2010
4 min read
Before we can create a custom UV texture, we need to export our current UV map from Blender to a file that an image manipulation program, such as GIMP or Photoshop, can read.

Exporting our UV map

If we have GIMP downloaded, we can export our UV map from Blender to a format that GIMP can read. To do this, make sure we can view our UV map in the Image Editor. Then, go to UVs | Export UV Layout and save the file in a folder you can easily get to, naming it UV_layout or whatever you like. Now it's time to open GIMP!

Downloading GIMP

Before we begin, we need an image manipulation program. If you don't have one of the high-end programs, such as Photoshop, there is still hope. There's a wonderful free (and open source) program called GIMP, which parallels Photoshop in functionality. For the sake of creating our textures, we will be using GIMP, but feel free to use whatever you are personally most comfortable with. To download GIMP, visit the program's website at http://www.gimp.org and download the right version for your operating system. Mac users will need to install X11 so GIMP will run; consult your Mac OS installation guide for instructions on how to install it. Windows users will need to install the GTK+ Runtime Environment to run GIMP—the download installer should warn you about this during installation. To install GTK+, visit http://www.gtk.org.

Hello GIMP!

When we open GIMP for the first time, we should have a 3-window layout, similar to the following screen:

Create a new document by selecting File | New. You can also use the Ctrl+N keyboard shortcut. This should bring up a dialog box with a list of settings we can use to customize our new document. Because Blender exported our UV map as an SVG file, we can choose any size image we want, because we can scale the image to fit our document. SVG stands for Scalable Vector Graphic.
Vector graphics are images defined by mathematically calculated paths, allowing them to be scaled infinitely without the pixelation caused when raster images are enlarged beyond a certain point. Change the Width and Height attributes to 2000 each. This will create a texture image 2000 pixels wide by 2000 pixels high. Click on OK to create our new document.

Getting reference images

Before we can create a UV texture for our wine bottle, which will primarily define the bottle's label, we need to know what is typically on a wine bottle's label. If you search the web for any wine bottle, you'll get a pretty good idea of what a wine bottle label looks like. However, for our purposes, we're going to use the following image:

Notice how there's typically the name of the wine company, the type of wine, and the year it was made. We're going to use all of these in our own wine bottle label.

Importing our UV map

A nice thing about GIMP is that we can import images as layers into our current file. We're going to do just this with our UV map. Go to File | Open as Layers... to bring up the file selection dialog box. Navigate to the UV map we saved earlier and open it. Another dialog box will pop up—we can use this to tell GIMP how we want our SVG to appear in our document. Change the Width and Height attributes to match our working document—2000px by 2000px. Click on OK to confirm. Not every file type will bring up this dialog box—it's specific to SVG files only. We should now see our UV map in the document as a new layer.

Before we continue, we should change the background color of our texture. Our label is going to be white, so we need to distinguish our label from the rest of the wine bottle's material. With our background layer selected, fill the layer with a black color using the Fill tool. Next, we can create the background color of the label. Create a new layer by clicking on the New Layer button. Name it label_background.
Using the Marquee Selection tool, make a selection similar to the following image:

Fill it, using the Fill tool, with white. This will be the background for our label—everything else we add will be made in relation to this layer. Keep the UV map layer on top as often as possible. This will help us keep a clear view of where our graphics are in relation to our UV map at all times.


Visual MySQL Database Design in MySQL Workbench

Packt
21 Oct 2009
3 min read
MySQL Workbench is a visual database design tool recently released by MySQL AB. The tool is specifically for designing MySQL databases. What you build in MySQL Workbench is called a physical data model. A physical data model is a data model for a specific RDBMS product; the model in this article will have some MySQL-unique specifications. We can generate (forward-engineer) the database objects from its physical model, which, in addition to tables and their columns, can also include other objects such as views. MySQL Workbench has many functions and features; this article by Djoni Darmawikarta shows some of them by way of an example. We'll build a physical data model for an order system where an order can be a sale order or a purchase order, and then forward-engineer our model into a MySQL database. The physical model of our example in EER diagram form will look like the following MySQL Workbench screenshot.

Creating ORDER Schema

Let's first create a schema where we want to store our order physical model. Click the + button (circled in red). Change the new schema's default name to ORDER. Notice that when you're typing in the schema name, its tab name on the Physical Schemata also changes accordingly—a nice feature. The order schema is added to the Catalog (I circled the order schema and its objects in red). Close the schema window. Confirm to rename the schema when prompted.

Creating Order Tables

We'll now create three tables that model the order: the ORDER table and its two subtype tables, SALES_ORDER and PURCHASE_ORDER, in the ORDER schema. First of all, make sure you select the ORDER schema tab, so that the tables we create will be in this schema. We'll create our tables in an EER diagram (EER = Enhanced Entity Relationship). So, double-click the Add Diagram button. Select (click) the Table icon, and then move your mouse onto the EER Diagram canvas and click on the location where you want to place the first table. Repeat for the other two tables.
You can move around the tables by dragging and dropping. Next, we'll work on table1, which we'll do using the Workbench's table editor. We start the table editor by right-clicking table1 and selecting Edit Table. Rename the table by typing in ORDER over table1. We'll next add its columns, so select the Columns tab. Replace the idORDER column name with ORDER_NO. Select INT as the data type from the drop-down list. We'd like this ORDER_NO column to be valued incrementally by the MySQL database, so we specify it as an AI column (Auto Increment). AI is a specific feature of MySQL databases. You can also specify other physical attributes of the table, such as its Collation, as well as other advanced options, such as its trigger and partitioning (the Trigger and Partitioning tabs). Notice that on the diagram our table1 has changed to ORDER, and it has its first column, ORDER_NO. In the Catalog you can also see the three tables. The black dots on the right of the tables indicate that they've been included in a diagram.
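For readers curious what forward-engineering the ORDER table might produce, here is a rough, hand-written sketch of the DDL (assembled as a Python string so the AUTO_INCREMENT column stands out; this is an illustrative guess, not actual Workbench output):

```python
# Hand-written approximation of the DDL for the ORDER table designed above.
# The table and column names follow the example; everything else is assumed.
order_ddl = "\n".join([
    "CREATE TABLE `ORDER` (",
    "  `ORDER_NO` INT NOT NULL AUTO_INCREMENT,  -- the AI column",
    "  PRIMARY KEY (`ORDER_NO`)",
    ");",
])
print(order_ddl)
```

The backticks around ORDER are needed because ORDER is a reserved word in SQL; Workbench quotes such identifiers for you.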


Creating and managing user accounts in Microsoft Windows SBS 2011

Packt
25 Apr 2012
6 min read
A user account or object in a Windows Server domain is a security mechanism that allows a person to access the resources of the network by "logging in" to the network. Doing so with the correct credentials automatically provides the user with the configured rights to network resources such as files, folders, printers, and so on. Most importantly, Windows SBS 2011 Standard is just like any Windows Server in that it leverages the power of Active Directory to manage and maintain these user objects. The major difference that Windows SBS 2011 Standard brings with it is that the majority of these tasks can be accomplished via wizards. Using the wizards not only reduces the time taken to administer a Windows SBS 2011 Standard network, but also always produces a consistent result. For these reasons alone, every Windows SBS 2011 Standard administrator should always use the wizards when administrating their network, especially when working with user accounts.

Creating, editing, and deleting user accounts

It is important that you always use the Windows SBS 2011 Standard Console and wizards when you create, edit, or delete any users. The main reason is that the wizards do a number of things behind the scenes to ensure everything works correctly on the Windows SBS 2011 Standard system. Creating users manually via native Active Directory tools may result in features not being enabled. The wizards are there to do all the hard work and create the accounts for you in Active Directory, so don't fear they are doing something different, they aren't. They are there to make an administrator's life easier, so use them every time. This cannot be stressed strongly enough.

To create a new user:

Run the Windows SBS 2011 Standard Console.

Select the Users and Groups icon.

Select the Users tab.
You should now see a list of any existing users, and you should also see the Add a new user account link under the Tasks section to the right. Click this option to create a new user. The Add a New User Account wizard now runs. Enter the details for the user and select the role for that user from the drop-down list. When complete, click the Next button to continue. At the next screen, you will be prompted to enter the user's password. It is important to note that you cannot progress past this screen until you have entered a password that conforms to both length and complexity requirements. These requirements can be modified in the system if required. Once you have entered a suitable password, the Add user account button will be available. Click this to continue. The wizard will now run and create a network account for the user, create a home folder for that user, create an e-mail account, set appropriate quotas, and send a welcome e-mail to the user's inbox. When complete, click the Finish button.

You should now see the user you created appear in the list of users.

To edit an existing user account, simply:

Run the Windows SBS 2011 Standard Console.

Select the Users and Groups icon.

Select the Users tab.

Select the user you wish to edit from the list of users that is displayed.

From the Tasks list on the right, select Edit user account properties.

You should now see all the properties of the user displayed in a window, as shown in the next screenshot. Simply select the desired section from the left and make any changes to the properties on the right. Click the OK button to save the changes and return to the Windows SBS 2011 Standard Console.

To delete an existing user account:

Run the Windows SBS 2011 Standard Console.

Select the Users and Groups icon.

Select the Users tab.

Select the user you wish to delete from the list of users that is displayed.

On the right-hand side, under the Tasks pane, select Remove user account.
You'll be prompted to confirm that you wish to delete the selected account. By default, doing so will also remove that user's mailbox and shared folder. If you don't want this, simply uncheck these options before clicking the Yes button to proceed. The selected account will then be removed from the system, and you should receive a confirmation that the process completed successfully. When this is displayed, simply click the OK button. This will take you to the Windows SBS 2011 Standard Console, and you should notice that the selected user no longer appears in the list.

Assigning permissions to users

To assign permissions to an existing user account, you will need to edit that account. To do this:

Run the Windows SBS 2011 Standard Console.

Select the Users and Groups icon.

Select the Users tab.

Select the user you wish to edit from the list of users that is displayed.

From the Tasks list on the right, select Edit user account properties.

You should now see all the properties of the user displayed in a window, as shown in the following screenshot. Simply select the desired section from the left and make any changes to the properties on the right. For example, if you wish to change the user's rights to the files on the server, this would normally be done via the Groups option. If you select the Groups option, you will be shown a list of groups that the user belongs to. You can select an existing group and remove it, or you can add a group. Adding a user to a group will automatically provide them access to whatever the group has access to. Click the OK button to save the changes and return to the Windows SBS 2011 Standard Console. You can also change the user's permissions by changing their role on the network. In this way, you can promote or demote a user to the same level as any pre-configured user role. To make this change:

Run the Windows SBS 2011 Standard Console.

Select the Users and Groups icon.

Select the Users tab.
Select the user you wish to edit from the list of users that is displayed.

From the Tasks list on the right, select Change user role for user accounts.

The wizard will then prompt you to select which role you wish that user to assume, as previously shown. You can also elect to Replace user permissions or settings or Add user permissions or settings. You will then be asked to select one or more users from a list of users whose role you wish to change. Once the selection process is complete, click the Change user role button. The wizard will now run, and when complete, you will be provided with a status window showing the success of the process. Click the Finish button to complete the process. The user will now have either the same permissions as the role you selected, or the merged permissions of the user role and the existing rights, depending on what option you selected during the process.


Build a custom news feed with Python [Tutorial]

Prasad Ramesh
10 Sep 2018
13 min read
To create a custom news feed model, we need data that the model can be trained on. This training data will be fed into a model in order to teach it to discriminate between the articles that we'd be interested in and the ones that we would not. This article is an excerpt from a book written by Alexander T. Combs titled Python Machine Learning Blueprints: Intuitive data projects you can relate to. In this article, we will learn to build a custom news corpus and annotate a large number of articles according to our respective interests. You can download the code and other relevant files used in this article from this GitHub link.

Creating a supervised training dataset

Before we can create a model of our taste in news articles, we need training data. This training data will be fed into our model in order to teach it to discriminate between the articles that we'd be interested in and the ones that we would not. To build this corpus, we will need to annotate a large number of articles that correspond to these interests. For each article, we'll label it either "y" or "n". This will indicate whether the article is one that we would want to have sent to us in our daily digest or not. To simplify this process, we will use the Pocket app. Pocket is an application that allows you to save stories to read later. You simply install the browser extension, and then click on the Pocket icon in your browser's toolbar when you wish to save a story. The article is saved to your personal repository. One of the great features of Pocket for our purposes is its ability to save the article with a tag of your choosing. We'll use this feature to mark interesting articles as "y" and non-interesting articles as "n".

Installing the Pocket Chrome extension

We use Google Chrome here, but other browsers should work similarly.
For Chrome, go into the Google App Store and look for the Extensions section: Image from https://chrome.google.com/webstore/search/pocket

Click on the blue Add to Chrome button. If you already have an account, log in, and if you do not have an account, go ahead and sign up (it's free). Once this is complete, you should see the Pocket icon in the upper right-hand corner of your browser. It will be greyed out, but once there is an article you wish to save, you can click on it. It will turn red once the article has been saved, as seen in the following images. The greyed-out icon can be seen in the upper right-hand corner. Image from https://news.ycombinator.com

When the icon is clicked, it turns red to indicate the article has been saved. Image from https://www.wsj.com

Now comes the fun part! Begin saving all articles that you come across. Tag the interesting ones with “y”, and the non-interesting ones with “n”. This is going to take some work. Your end results will only be as good as your training set, so you're going to need to do this for hundreds of articles. If you forget to tag an article when you save it, you can always go to the site, https://getpocket.com, to tag it there.

Using the Pocket API to retrieve stories

Now that you've diligently saved your articles to Pocket, the next step is to retrieve them. To accomplish this, we'll use the Pocket API. You can sign up for an account at https://getpocket.com/developer/apps/new. Click on Create New App in the upper left-hand side and fill in the details to get your API key. Make sure to click all of the permissions so that you can add, change, and retrieve articles. Image from https://getpocket.com/developer

Once you have filled this in and submitted it, you will receive your CONSUMER KEY. You can find this in the upper left-hand corner under My Apps.
This will look like the following screen, but obviously with a real key: Image from https://getpocket.com/developer

Once this is set, you are ready to move on to the next step, which is to set up the authorization. It requires that you input your consumer key and a redirect URL. The redirect URL can be anything. Here I have used my Twitter account:

import requests
auth_params = {'consumer_key': 'MY_CONSUMER_KEY', 'redirect_uri': 'https://www.twitter.com/acombs'}
tkn = requests.post('https://getpocket.com/v3/oauth/request', data=auth_params)
tkn.content

You will see the following output: The output will have the code that you'll need for the next step. Place the following in your browser bar:

https://getpocket.com/auth/authorize?request_token=some_long_code&redirect_uri=https%3A//www.twitter.com/acombs

If you change the redirect URL to one of your own, make sure to URL encode it. There are a number of resources for this. One option is to use the Python library urllib; another is to use a free online encoder. At this point, you should be presented with an authorization screen. Go ahead and approve it, and we can move on to the next step:

usr_params = {'consumer_key': 'my_consumer_key', 'code': 'some_long_code'}
usr = requests.post('https://getpocket.com/v3/oauth/authorize', data=usr_params)
usr.content

We'll use the access token from the output here to move on to retrieving the stories. First, we retrieve the stories tagged “n”:

no_params = {'consumer_key': 'my_consumer_key', 'access_token': 'some_super_long_code', 'tag': 'n'}
no_result = requests.post('https://getpocket.com/v3/get', data=no_params)
no_result.text

The preceding code generates the following output: Note that we have a long JSON string on all the articles that we tagged “n”. There are several keys in this, but we are really only interested in the URL at this point.
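As noted above, a custom redirect URL must be URL encoded before it is embedded in the authorization link. A minimal sketch using the standard library (the Twitter URL mirrors the example above; the request token is a placeholder):

```python
from urllib.parse import quote

# Percent-encode the redirect URL so it can be embedded as a query parameter.
# With quote()'s default safe='/', ':' becomes %3A but the slashes are kept,
# which matches the https%3A// form used in the authorization link above.
redirect_uri = 'https://www.twitter.com/acombs'
encoded = quote(redirect_uri)
print(encoded)  # https%3A//www.twitter.com/acombs

auth_url = ('https://getpocket.com/auth/authorize'
            '?request_token=some_long_code&redirect_uri=' + encoded)
print(auth_url)
```

Pass `safe=''` instead if you want the slashes percent-encoded as well.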
We'll go ahead and create a list of all the URLs from this:

import json
no_jf = json.loads(no_result.text)
no_jd = no_jf['list']
no_urls = []
for i in no_jd.values():
    no_urls.append(i.get('resolved_url'))
no_urls

The preceding code generates the following output: This list contains all the URLs of stories that we aren't interested in. Now, let's put this in a DataFrame object and tag it as such:

import pandas as pd
no_uf = pd.DataFrame(no_urls, columns=['urls'])
no_uf = no_uf.assign(wanted = lambda x: 'n')
no_uf

The preceding code generates the following output: Now, we're all set with the unwanted stories. Let's do the same thing with the stories that we are interested in:

yes_params = {'consumer_key': 'my_consumer_key', 'access_token': 'some_super_long_token', 'tag': 'y'}
yes_result = requests.post('https://getpocket.com/v3/get', data=yes_params)
yes_jf = json.loads(yes_result.text)
yes_jd = yes_jf['list']
yes_urls = []
for i in yes_jd.values():
    yes_urls.append(i.get('resolved_url'))
yes_uf = pd.DataFrame(yes_urls, columns=['urls'])
yes_uf = yes_uf.assign(wanted = lambda x: 'y')
yes_uf

The preceding code generates the following output: Now that we have both types of stories for our training data, let's join them together into a single DataFrame:

df = pd.concat([yes_uf, no_uf])
df.dropna(inplace=True)
df

The preceding code generates the following output: Now that we're set with all our URLs and their corresponding tags in a single frame, we'll move on to downloading the HTML for each article. We'll use another free service for this called embed.ly.

Using the embed.ly API to download story bodies

We have all the URLs for our stories, but unfortunately this isn't enough to train on. We'll need the full article body. By itself, this could become a huge challenge if we wanted to roll our own scraper, especially if we were going to be pulling stories from dozens of sites. We would need to write code to target the article body while carefully avoiding all the other site gunk that surrounds it.
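The labeling and concatenation steps above follow a simple pattern: build one DataFrame per tag, then stack them. A self-contained sketch with made-up URLs standing in for the Pocket results:

```python
import pandas as pd

# Hypothetical URLs standing in for the real Pocket results
no_urls = ['http://example.com/boring-1', 'http://example.com/boring-2']
yes_urls = ['http://example.com/great-1']

# One frame per tag, with the label attached as a 'wanted' column
no_uf = pd.DataFrame(no_urls, columns=['urls']).assign(wanted='n')
yes_uf = pd.DataFrame(yes_urls, columns=['urls']).assign(wanted='y')

# Stack the two labeled frames into a single training frame
df = pd.concat([yes_uf, no_uf], ignore_index=True)
df.dropna(inplace=True)
print(df)
```

Passing `ignore_index=True` avoids the duplicate index values that a plain `pd.concat` of the two frames would otherwise produce.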
Fortunately, there are a number of free services that will do this for us. We're going to use embed.ly, but there are a number of other services that you could also use. The first step is to sign up for embed.ly API access. You can do this at https://app.embed.ly/signup. This is a straightforward process. Once you confirm your registration, you will receive an API key. You then just need to use this key in your HTTP request. Let's do this now:

import urllib
def get_html(x):
    qurl = urllib.parse.quote(x)
    rhtml = requests.get('https://api.embedly.com/1/extract?url=' + qurl + '&key=some_api_key')
    ctnt = json.loads(rhtml.text).get('content')
    return ctnt
df.loc[:,'html'] = df['urls'].map(get_html)
df.dropna(inplace=True)
df

The preceding code generates the following output: With that, we have the HTML of each story. As the content is embedded in HTML markup and we want to feed plain text into our model, we'll use a parser to strip out the markup tags:

from bs4 import BeautifulSoup
def get_text(x):
    soup = BeautifulSoup(x, 'lxml')
    text = soup.get_text()
    return text
df.loc[:,'text'] = df['html'].map(get_text)
df

The preceding code generates the following output: With this, we have our training set ready. We can now move on to a discussion of how to transform our text into something that a model can work with.

Setting up your daily personal newsletter

In order to set up a personal e-mail with news stories, we're going to utilize IFTTT again. As in Chapter 3, Build an App to Find Cheap Airfares, we'll use the Maker Channel to send a POST request. However, this time the payload will be our news stories. If you haven't set up the Maker Channel, do this now. Instructions can be found in Chapter 3, Build an App to Find Cheap Airfares. You should also set up the Gmail channel. Once that is complete, we'll add a recipe to combine the two. First, click on Create a Recipe from the IFTTT home page.
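The get_text helper above relies on BeautifulSoup with the lxml parser. If you only need a quick tag-stripping pass (or don't have bs4 installed), the standard library's html.parser can do something similar; this is a rough sketch, not a drop-in replacement:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text nodes of an HTML document, skipping the tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Keep only non-empty text fragments between tags
        if data.strip():
            self.chunks.append(data.strip())

    def text(self):
        return ' '.join(self.chunks)

def get_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return parser.text()

print(get_text('<p>Hello <b>world</b></p>'))  # Hello world
```

Unlike BeautifulSoup, this sketch does not handle malformed markup gracefully and will also keep the contents of script or style tags, so treat it as a fallback only.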
Then, search for the Maker Channel: Image from https://www.ifttt.com

Select this, then select Receive a web request: Image from https://www.ifttt.com

Then, give the request a name. I'm using news_event: Image from https://www.ifttt.com

Finish by clicking on Create Trigger. Next, we'll set up the e-mail piece. Search for Gmail and click on the icon seen as follows: Image from https://www.ifttt.com

Once you have clicked on Gmail, click on Send an e-mail. From here, you can customize your e-mail message. Image from https://www.ifttt.com

Input your e-mail address, a subject line, and finally, include Value1 in the e-mail body. We will pass our story title and link into this with our POST request. Click on Create Recipe to finalize this. Now, we're ready to generate the script that will run on a schedule, automatically sending us articles of interest. We're going to create a separate script for this, but one last thing that we need to do in our existing code is serialize our vectorizer and our model:

import pickle
pickle.dump(model, open(r'/Users/alexcombs/Downloads/news_model_pickle.p', 'wb'))
pickle.dump(vect, open(r'/Users/alexcombs/Downloads/news_vect_pickle.p', 'wb'))

With this, we have saved everything that we need from our model. In our new script, we will read these in to generate our new predictions. We're going to use the same scheduling library to run the code that we used in Chapter 3, Build an App to Find Cheap Airfares. Putting it all together, we have the following script:

# get our imports.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
import schedule
import time
import pickle
import json
import gspread
import requests
from bs4 import BeautifulSoup
from oauth2client.client import SignedJwtAssertionCredentials

# create our fetching function
def fetch_news():
    try:
        vect = pickle.load(open(r'/Users/alexcombs/Downloads/news_vect_pickle.p', 'rb'))
        model = pickle.load(open(r'/Users/alexcombs/Downloads/news_model_pickle.p', 'rb'))
        json_key = json.load(open(r'/Users/alexcombs/Downloads/APIKEY.json'))
        scope = ['https://spreadsheets.google.com/feeds']
        credentials = SignedJwtAssertionCredentials(json_key['client_email'], json_key['private_key'].encode(), scope)
        gc = gspread.authorize(credentials)
        ws = gc.open("NewStories")
        sh = ws.sheet1
        zd = list(zip(sh.col_values(2), sh.col_values(3), sh.col_values(4)))
        zf = pd.DataFrame(zd, columns=['title', 'urls', 'html'])
        zf.replace('', pd.np.nan, inplace=True)
        zf.dropna(inplace=True)

        def get_text(x):
            soup = BeautifulSoup(x, 'lxml')
            text = soup.get_text()
            return text

        zf.loc[:, 'text'] = zf['html'].map(get_text)
        tv = vect.transform(zf['text'])
        res = model.predict(tv)
        rf = pd.DataFrame(res, columns=['wanted'])
        rez = pd.merge(rf, zf, left_index=True, right_index=True)
        news_str = ''
        for t, u in zip(rez[rez['wanted'] == 'y']['title'], rez[rez['wanted'] == 'y']['urls']):
            news_str = news_str + t + '\n' + u + '\n'
        payload = {"value1": news_str}
        r = requests.post('https://maker.ifttt.com/trigger/news_event/with/key/IFTTT_KEY', data=payload)
        # clean up the worksheet
        lenv = len(sh.col_values(1))
        cell_list = sh.range('A1:F' + str(lenv))
        for cell in cell_list:
            cell.value = ""
        sh.update_cells(cell_list)
        print(r.text)
    except:
        print('Failed')

schedule.every(480).minutes.do(fetch_news)

while 1:
    schedule.run_pending()
    time.sleep(1)

What this script will do is run every 8 hours (480 minutes), pull down the news stories from Google Sheets, run the stories through the model, generate an e-mail
by sending a POST request to IFTTT for the stories that are predicted to be of interest, and then finally, it will clear out the stories in the spreadsheet so that only new stories get sent in the next e-mail. Congratulations! You now have your own personalized news feed! In this tutorial we learned how to create a custom news feed. To know more about setting it up and other intuitive Python projects, check out Python Machine Learning Blueprints: Intuitive data projects you can relate to. Writing web services with functional Python programming [Tutorial] Visualizing data in R and Python using Anaconda [Tutorial] Python 3.7 beta is available as the second generation Google App Engine standard runtime
article-image-troubleshooting-in-sql-server
Sunith Shetty
15 Mar 2018
16 min read
Save for later

Troubleshooting in SQL Server

Sunith Shetty
15 Mar 2018
16 min read
This article is an excerpt from a book, SQL Server 2017 Administrator's Guide, written by Marek Chmel and Vladimír Mužný. This book will help you learn to implement and administer a successful database solution with SQL Server 2017.

Today, we will perform SQL Server analysis and also learn ways for efficient performance monitoring and tuning.

Performance monitoring and tuning

Performance monitoring and tuning is a crucial part of your database administration skill set: it keeps your server stable and performing well, and enables you to find and fix possible issues. The overall system performance can decrease over time; your system may work with more data or even become totally unresponsive. In such cases, you need the skills and tools to find the issue and bring the server back to normal as fast as possible. We can use several tools on the operating system layer and, then, inside the SQL Server to verify the performance and the possible root cause of the issue. The first tool that we can use is the Performance Monitor, which is available on your Windows Server. Performance Monitor can be used to track important SQL Server counters, which can be very helpful in evaluating SQL Server performance. To add a counter, simply right-click on the monitoring screen and use the Add Counters item. If the SQL Server instance that you're monitoring is a default instance, you will find all the performance objects listed as SQL Server. If your instance is named, then the performance objects will be listed as MSSQL$InstanceName in the list of performance objects. We can split the important counters to watch between the system counters for the whole server and specific SQL Server counters. The list of system counters to watch includes the following:

Processor(_Total)—% Processor Time: This is a counter to display the CPU load.
Constant high values should be investigated to verify whether the current load does not exceed the performance limits of your hardware or VM server, or whether your workload is running without proper indexes and statistics and is generating bad query plans.

Memory—Available MBytes: This counter displays the available memory on the operating system. There should always be enough memory for the operating system. If this counter drops below 64 MB, a low-memory notification is sent and the SQL Server will reduce its memory usage.

Physical Disk—Avg. Disk sec/Read: This disk counter provides the average read latency for your storage system; if your storage is made up of several different disks, be careful to monitor the proper volume.

Physical Disk—Avg. Disk sec/Write: This indicates the average latency of disk writes.

Physical Disk—Disk Reads/sec: This indicates the number of disk reads per second.

Physical Disk—Disk Writes/sec: This indicates the number of disk writes per second.

System—Processor Queue Length: This counter displays the number of threads waiting on a system CPU. If the counter is above 0, this means that there are more requests than the CPU can handle, and if the counter is constantly above 0, this may signal performance issues.

Network Interface—Bytes Total/sec: This indicates the total number of bytes transferred per second.

Once you have added all these system counters, you can see the values in real time, or you can configure a data collection, which will run for a specified time and periodically collect the information. With SQL Server-specific counters, we can dig deeper into the CPU, memory, and storage utilization to see what the SQL Server is doing and how it is utilizing the subsystems.
SQL Server memory monitoring and troubleshooting

Important counters to watch for SQL Server memory utilization include counters from the SQL Server: Buffer Manager performance object and from SQL Server: Memory Manager:

SQL Server: Buffer Manager—Buffer cache hit ratio: This counter displays the ratio of how often the SQL Server can find the proper data in the cache when a query requests it. If the data is not found in the cache, it has to be read from the disk. The higher the counter, the better the overall performance, since memory access is usually faster than the disk subsystem.

SQL Server: Buffer Manager—Page life expectancy: This counter measures how long, in seconds, a page can stay in memory. The longer a page can stay in memory, the less likely it is that the SQL Server will need to access the disk in order to get the data into memory again.

SQL Server: Memory Manager—Total Server Memory (KB): This is the amount of memory the server has committed using the memory manager.

SQL Server: Memory Manager—Target Server Memory (KB): This is the ideal amount of memory the server can consume. On a stable system, the target and total should be equal unless you face memory pressure. Once the memory is utilized after the warm-up of your server, these two counters should not drop significantly; a significant drop would be another indication of system-level memory pressure, where the SQL Server memory manager has to deallocate memory.

SQL Server: Memory Manager—Memory Grants Pending: This counter displays the total number of SQL Server processes that are waiting to be granted memory from the memory manager.
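The Buffer Manager counters above can also be read with T-SQL rather than Performance Monitor. A sketch querying page life expectancy (the object and counter names follow the pattern used in the queries below; on a named instance, the object_name prefix will differ):

```sql
-- Read page life expectancy directly from the performance counter DMV
SELECT [object_name], [counter_name],
       [cntr_value] AS [Page Life Expectancy (s)]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND [counter_name] = 'Page life expectancy';
```

On servers with multiple NUMA nodes, the Buffer Node object exposes a per-node page life expectancy as well, so a single low value here is worth investigating per node.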
To check the performance counters, you can also use a T-SQL query against the sys.dm_os_performance_counters DMV:

SELECT [counter_name] AS [Counter Name], [cntr_value]/1024 AS [Server Memory (MB)]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] IN ('Total Server Memory (KB)', 'Target Server Memory (KB)')

This query will return two values: one for target memory and one for total memory. These two should be close to each other on a warmed-up system. Another query you can use gets the information from a DMV named sys.dm_os_sys_memory:

SELECT total_physical_memory_kb/1024/1024 AS [Physical Memory (GB)],
  available_physical_memory_kb/1024/1024 AS [Available Memory (GB)],
  system_memory_state_desc AS [System Memory State]
FROM sys.dm_os_sys_memory WITH (NOLOCK)
OPTION (RECOMPILE)

This query will display the available physical memory and the total physical memory of your server with several possible memory states:

Available physical memory is high (this is the state you would like to see on your system, indicating there is no lack of memory)
Physical memory usage is steady
Available physical memory is getting low
Available physical memory is low

The memory grants can be verified with a T-SQL query:

SELECT [object_name] AS [Object name], cntr_value AS [Memory Grants Pending]
FROM sys.dm_os_performance_counters WITH (NOLOCK)
WHERE [object_name] LIKE N'%Memory Manager%'
AND counter_name = N'Memory Grants Pending'
OPTION (RECOMPILE);

If you face memory issues, there are several steps you can take for improvement:

Check and configure your SQL Server max memory usage
Add more RAM to your server; the limit for Standard Edition is 128 GB and there is no limit for Enterprise Edition
Use Lock Pages in Memory
Optimize your queries

SQL Server storage monitoring and troubleshooting

The important counters to watch for SQL Server storage utilization include counters from the SQL Server: Access Methods
performance object:

SQL Server: Access Methods—Full Scans/sec: This counter displays the number of full scans per second, which can be either table or full-index scans
SQL Server: Access Methods—Index Searches/sec: This counter displays the number of index searches per second
SQL Server: Access Methods—Forwarded Records/sec: This counter displays the number of forwarded records per second

Monitoring the disk system is crucial, since your disk is used as storage for the following:

Data files
Log files
The tempDB database
The page file
Backup files

To verify the disk latency and IOPS metrics of your drives, you can use the Performance Monitor, or T-SQL commands that query the sys.dm_os_volume_stats and sys.dm_io_virtual_file_stats DMFs. Simple code to start with would be a T-SQL script utilizing the first DMF to check the space available within the database files:

SELECT f.database_id, f.file_id, volume_mount_point, total_bytes, available_bytes
FROM sys.master_files AS f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id);

To check the I/O file stats with the second DMF, you can use a T-SQL script for checking the information about tempDB data files:

SELECT *
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
JOIN sys.master_files mf ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
WHERE mf.database_id = 2 AND mf.type = 0

To measure the disk performance, we can use a tool named Diskspd, which is a replacement for the older SQLIO tool that was used for a long time. Diskspd is an external utility, which is not available on the operating system by default. This tool can be downloaded from GitHub or the TechNet Gallery; see https://github.com/microsoft/diskspd.
The following example runs a test for 300 seconds using a single thread to drive 100 percent random 64 KB reads at a depth of 15 overlapped (outstanding) I/Os to a regular file:

DiskSpd -d300 -F1 -w0 -r -b64k -o15 d:\datafile.dat

Troubleshooting wait statistics

We can use the wait statistics approach for a thorough understanding of the SQL Server workload and undertake performance troubleshooting based on the collected data. Wait statistics are based on the fact that, any time a request has to wait for a resource, the SQL Server tracks this information, and we can use it for further analysis. Any user process can include several threads. A thread is a single unit of execution on SQL Server, where SQLOS controls the thread scheduling instead of relying on the operating system layer. Each processor core has its own scheduler component responsible for executing such threads. To see the available schedulers in your SQL Server, you can use the following query:

SELECT * FROM sys.dm_os_schedulers

This code will return all the schedulers in your SQL Server; some will be displayed as visible online and some as hidden online. The hidden ones are for internal system tasks, while the visible ones are used by user tasks running on the SQL Server. There is one more scheduler, which is displayed as Visible Online (DAC). This one is used for the dedicated administrator connection, which comes in handy when the SQL Server stops responding. To use a dedicated admin connection, you can modify your SSMS connection to use the DAC, or you can use a switch with the sqlcmd.exe utility to connect to the DAC.
To connect to the default instance with DAC on your server, you can use the following command:

sqlcmd.exe -E -A

Each thread can be in three possible states:

running: This indicates that the thread is running on the processor
suspended: This indicates that the thread is waiting for a resource on a waiter list
runnable: This indicates that the thread is waiting for execution on a runnable queue

Each running thread runs until it has to wait for a resource to become available or until it has exhausted the CPU time allotted to a running thread, which is set to 4 ms. This 4 ms time is called a quantum and is visible in the output of the previous query to sys.dm_os_schedulers. When a thread requires any resource, it is moved away from the processor to a waiter list, where it waits for the resource to become available. Once the resource is available, the thread is notified and moves to the bottom of the runnable queue. Any waiting thread can be found via the following code, which will display the waiting threads and the resources they are waiting for:

SELECT * FROM sys.dm_os_waiting_tasks

The threads then transition between execution on the CPU, the waiter list, and the runnable queue. There is a special case when a thread does not need to wait for any resource and has already run for 4 ms on the CPU; the thread will then be moved directly to the runnable queue instead of the waiter list. In the following image, we can see the thread states and the objects where the thread resides. When the thread is waiting on the waiter list, we talk about resource wait time. When the thread is waiting on the runnable queue to get on the CPU for execution, we talk about signal wait time. The total wait time is, then, the sum of the signal and resource wait times.
You can find the ratio of signal to resource wait times with the following script:

SELECT signalWaitTimeMs = SUM(signal_wait_time_ms),
  '%signal waits' = CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS numeric(20,2)),
  resourceWaitTimeMs = SUM(wait_time_ms - signal_wait_time_ms),
  '%resource waits' = CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms) AS numeric(20,2))
FROM sys.dm_os_wait_stats

When the signal wait ratio goes over 30 percent, there is serious CPU pressure and your processor(s) will have a hard time handling all the incoming requests from the threads. The following query grabs the wait statistics and displays the most frequent wait types that were recorded while threads were waiting on the waiter list for a particular resource:

WITH [Waits] AS
(SELECT
  [wait_type],
  [wait_time_ms] / 1000.0 AS [WaitS],
  ([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [ResourceS],
  [signal_wait_time_ms] / 1000.0 AS [SignalS],
  [waiting_tasks_count] AS [WaitCount],
  100.0 * [wait_time_ms] / SUM ([wait_time_ms]) OVER() AS [Percentage],
  ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum]
FROM sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (
  N'BROKER_EVENTHANDLER', N'BROKER_RECEIVE_WAITFOR', N'BROKER_TASK_STOP',
  N'BROKER_TO_FLUSH', N'BROKER_TRANSMITTER', N'CHECKPOINT_QUEUE', N'CHKPT',
  N'CLR_AUTO_EVENT', N'CLR_MANUAL_EVENT', N'CLR_SEMAPHORE', N'DIRTY_PAGE_POLL',
  N'DISPATCHER_QUEUE_SEMAPHORE', N'EXECSYNC', N'FSAGENT',
  N'FT_IFTS_SCHEDULER_IDLE_WAIT', N'FT_IFTSHC_MUTEX', N'HADR_CLUSAPI_CALL',
  N'HADR_FILESTREAM_IOMGR_IOCOMPLETION', N'HADR_LOGCAPTURE_WAIT',
  N'HADR_NOTIFICATION_DEQUEUE', N'HADR_TIMER_TASK', N'HADR_WORK_QUEUE',
  N'KSOURCE_WAKEUP', N'LAZYWRITER_SLEEP', N'LOGMGR_QUEUE',
  N'MEMORY_ALLOCATION_EXT', N'ONDEMAND_TASK_QUEUE',
  N'PREEMPTIVE_XE_GETTARGETSTATE', N'PWAIT_ALL_COMPONENTS_INITIALIZED',
  N'PWAIT_DIRECTLOGCONSUMER_GETNEXT',
  N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP', N'QDS_ASYNC_QUEUE',
  N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP', N'QDS_SHUTDOWN_QUEUE',
  N'REDO_THREAD_PENDING_WORK', N'REQUEST_FOR_DEADLOCK_SEARCH',
  N'RESOURCE_QUEUE', N'SERVER_IDLE_CHECK', N'SLEEP_BPOOL_FLUSH',
  N'SLEEP_DBSTARTUP', N'SLEEP_DCOMSTARTUP', N'SLEEP_MASTERDBREADY',
  N'SLEEP_MASTERMDREADY', N'SLEEP_MASTERUPGRADED', N'SLEEP_MSDBSTARTUP',
  N'SLEEP_SYSTEMTASK', N'SLEEP_TASK', N'SLEEP_TEMPDBSTARTUP',
  N'SNI_HTTP_ACCEPT', N'SP_SERVER_DIAGNOSTICS_SLEEP', N'SQLTRACE_BUFFER_FLUSH',
  N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', N'SQLTRACE_WAIT_ENTRIES',
  N'WAIT_FOR_RESULTS', N'WAITFOR', N'WAITFOR_TASKSHUTDOWN',
  N'WAIT_XTP_RECOVERY', N'WAIT_XTP_HOST_WAIT', N'WAIT_XTP_OFFLINE_CKPT_NEW_LOG',
  N'WAIT_XTP_CKPT_CLOSE', N'XE_DISPATCHER_JOIN', N'XE_DISPATCHER_WAIT',
  N'XE_TIMER_EVENT')
AND [waiting_tasks_count] > 0
)
SELECT
  MAX ([W1].[wait_type]) AS [WaitType],
  CAST (MAX ([W1].[WaitS]) AS DECIMAL (16,2)) AS [Wait_S],
  CAST (MAX ([W1].[ResourceS]) AS DECIMAL (16,2)) AS [Resource_S],
  CAST (MAX ([W1].[SignalS]) AS DECIMAL (16,2)) AS [Signal_S],
  MAX ([W1].[WaitCount]) AS [WaitCount],
  CAST (MAX ([W1].[Percentage]) AS DECIMAL (5,2)) AS [Percentage],
  CAST ((MAX ([W1].[WaitS]) / MAX ([W1].[WaitCount])) AS DECIMAL (16,4)) AS [AvgWait_S],
  CAST ((MAX ([W1].[ResourceS]) / MAX ([W1].[WaitCount])) AS DECIMAL (16,4)) AS [AvgRes_S],
  CAST ((MAX ([W1].[SignalS]) / MAX ([W1].[WaitCount])) AS DECIMAL (16,4)) AS [AvgSig_S]
FROM [Waits] AS [W1]
INNER JOIN [Waits] AS [W2] ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum]
HAVING SUM ([W2].[Percentage]) - MAX( [W1].[Percentage] ) < 95
GO

This code comes from a whitepaper published by SQLskills, SQL Server Performance Tuning Using Wait Statistics by Erin Stellato and Jonathan Kehayias, which in turn uses the full query by Paul Randal available at https://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/.
Some of the typical wait stats you may see are:

PAGEIOLATCH

The PAGEIOLATCH wait type is used when a thread is waiting for a page to be read into the buffer pool from the disk. This wait type comes in two main forms:

PAGEIOLATCH_SH: The page will be read from the disk
PAGEIOLATCH_EX: The page will be modified

You may quickly assume that the storage has to be the problem, but that may not be the case. Like any other wait, these need to be considered in correlation with other wait types and other available counters to correctly find the root cause of slow SQL Server operations. The page may be read into the buffer pool because it was previously removed due to memory pressure and is needed again. So, you may also investigate the following:

Buffer Manager: Page life expectancy
Buffer Manager: Buffer cache hit ratio

Also, you need to consider the following as possible contributors to the PAGEIOLATCH wait types:

Large scans versus seeks on the indexes
Implicit conversions
Outdated statistics
Missing indexes

PAGELATCH

This wait type is quite frequently confused with PAGEIOLATCH, but PAGELATCH is used for pages already present in memory. The thread waits for access to such a page, again with possible PAGELATCH_SH and PAGELATCH_EX wait types. A pretty common situation with this wait type is tempDB contention, where you need to analyze which page is being waited for and what type of query is actually waiting for such a resource. As a solution to tempDB contention, you can do the following:

Add more tempDB data files
Use trace flags 1118 and 1117 for tempDB on systems older than SQL Server 2016

CXPACKET

This wait type is encountered when any thread is running in parallel. The CXPACKET wait type itself does not mean that there is really any problem on the SQL Server. But if such a wait type accumulates very quickly, it may be a signal of skewed statistics, which require an update, or of a parallel scan on a table where proper indexes are missing.
The degree of parallelism is controlled via the max degree of parallelism (MAXDOP) setting, which can be configured on the following:

The server level
The database level
A query level, with a hint

We learned about SQL Server analysis with the wait statistics troubleshooting methodology and possible DMVs to get more insight into the problems occurring in the SQL Server. To know more about how to successfully create, design, and deploy databases using SQL Server 2017, do check out the book SQL Server 2017 Administrator's Guide.

article-image-basic-skills-traits-and-competencies-manager
Packt
29 May 2012
18 min read
Save for later

Basic Skills, Traits, and Competencies of a Manager

Packt
29 May 2012
18 min read
In India, being a manager is highly valued. A majority of people see themselves taking a managerial position some day. However, can anyone become a manager? A really good manager? Are managers born or made? Do all managers, at least all good managers, share something in common? When we look around and see the journeys taken by different managers, their working styles and behaviors, we can hypothesize that:

Managers are born and made. Some folks have a natural flair for being a manager and some acquire the essential skills to be a manager in a given situation.

Not everyone may enjoy being a manager. While you may be 'promoted' to become a manager, you may find that you don't really enjoy the time spent talking to people, driving them to results, and compiling status reports for your management.

It appears that good managers do have many things in common, even though they may have their own style of execution.

In this article by Rahul Goyal, author of Management in India: Grow from an Accidental to a Successful Manager in the IT & Knowledge Industry, we will explore the skills, traits, talents, and competencies that are usually required and expected of someone playing a manager's role, and also burst some myths surrounding managers. (For more resources on management, see here.)

Skills, traits, talents, and competencies

We have all heard these terms. Let's try to understand what they mean and how they are different from or similar to each other.

Skills

Skill is defined as the ability or capacity to do something, acquired through specific training. Skills are learned abilities. Technically, anybody can take a course in a specific subject and acquire that skill. Of course, the person should have the aptitude to learn those skills. Developing skills does not need to happen in a formally structured or schooled way. Babies develop motor skills as a natural process of learning. People develop communication skills, which are part formal learning and part informal learning.
How well somebody can translate that acquired ability, that is, skill, goes beyond the definition of skill. In order to be an engineer, you need to acquire engineering skills, or in order to become a chef, you need to acquire cooking skills. This alone will not make you a good chef or engineer.

Traits

Traits are at the other end of the spectrum. Traits are personal. Traits are often linked to a person's character. Being shy is a trait. Some people are introverts and others are extroverts. Traits determine your response or behavior in a wide variety of situations. Some people are fearless by nature and others are cautious. Traits are often described in pairs of opposing behaviors; for example, extrovert-introvert and honest-dishonest. Many people consider traits to be innate, and that can definitely be true. However, it is not always true. There are traits that people develop through their upbringing and the environment they live in. As people progress through life, they acquire new traits or modify ones they already have; these are called learned traits. Also, people display contradictory traits, so an honest person can become dishonest and vice versa.

Talents

Talent is an oft-used word in business today. A pure definition of talent from Webster's Dictionary (1913) is as follows:

Intellectual ability, natural or acquired; mental endowment or capacity; skill in accomplishing; a special gift, particularly in business, art, or the like; faculty.

It sounds like a lot of things, but the key phrases are intellectual, natural, and skill in accomplishing. Talents are supposed to be God's gift to you, applied to a specific craft or job. Specific application is the key phrase here. It is very possible that Yuvraj Singh could have become a successful soccer player had he chosen to pursue that. Anyway, we are all glad that he chose cricket. Michael Jordan, the basketball legend, is an excellent golfer now, and he tried his hand at professional baseball as well.
Both Yuvraj and Jordan almost certainly have a combination of different talents, such as physical stamina, focus, and discipline, which when applied to a particular sport created a great performance.

Competencies

Competencies are behaviors an employee displays in order to translate knowledge and skills, and leverage traits, to deliver a performance on the job. Competencies are related to a given job function. Hence, different jobs will require different competencies. An offshore software engineer needs to have the necessary technical skills to write the code, and written and verbal communication skills to effectively communicate across the world, among others. In this case, the communication competency is highly valuable, given the offshore nature of the work. If the job description changes to that of a software engineer working as a database administrator, a slightly different set of competencies apply. While related technical skills are very important, in this case deeper expertise will be desired, given that databases are critical to the business and there is less scope for error. Communication competencies are always required, but basic communication may be enough for this function. However, a meticulous attitude and the ability to handle high levels of stress will be important, given the criticality of the infrastructure. Competencies are the application of all that we know and can do. Almost all employers describe a job function in terms of competencies and results required. Also, almost all employee appraisal forms will attempt to grade people in terms of competencies on some scale. For example, a competency of 'Result Orientation' may be measured on a scale of 1 through 5, and an appraiser may be advised to comment on the reasons behind the rating.

Top skills, traits, and competencies expected of a manager

Let's look at some key skills, traits, and competencies that are expected of a good manager.
Love of working with people

Most managers will spend a majority of their time managing people, and everything that is connected with people, even more so in the knowledge industry. Do you find yourself talking to people all the time? Do people tend to bring their problems to you? And when they do so, do you see it as adding value by finding a solution, or do you see it as a headache which you shouldn't have to deal with? If you find satisfaction in just being with people and helping them achieve their results, you have a primary quality of a manager. Going to parties and having a good time with people also shows that you love being around people (and surely shows your love for food or drink), but not the essential part of helping people achieve their goals. Although all interactions count, including phone conversations, e-mail, or Instant Messenger, it's the face time that has the most impact. If you'd rather spend your time in your own office by yourself, perhaps a manager role isn't for you. You can, of course, force yourself to spend time with people as part of the job requirement, but unless you really enjoy that time, it will be hard to sustain and excel as a manager. You may end up limiting your interactions to a select few, where comfort levels are high, at the risk of alienating others. Global managers today get less and less face time with some of their team members, sometimes as little as 12 hours in the entire working year. Without the love of working with people, interaction with remote workers can become really difficult, as it will take extra effort to stay connected at a deeper level than just work. If you like to work with people, you are likely to be high on empathy. When people approach you with a problem, you may feel the problem to be your own. Even before the person tells you there's a problem, you already know there is one by the look on his/her face, the voice, and the body language.
Your body language will be inviting and welcoming. While the person describes their problem to you, you listen intently and non-judgmentally, even supporting them so that he/she is encouraged to open up. If you are high on empathy, you may also have a feel for what kind of suggestion will work with this person and how it should be put across. You follow up, and when you see the person getting over the problem, you feel a sense of satisfaction. Empathy also helps in understanding and working in a diverse environment, for example, working with people who grew up culturally different from you. Especially in India, there is a high degree of diversity, with people from different backgrounds; and while the working population is highly skewed towards men, women have a growing presence, especially in the knowledge industry. Please note that a manager doesn't have to be an extrovert to love working with people. Extroversion is often equated with being outgoing, and that isn't the same as having a love of working with people.

Myth: nice manager

Sometimes managers wish to be seen as popular, someone everyone wants to work for: a nice manager, who listens to his people and rarely says no to anything, be it taking vacations or a promotion. Being a people person doesn't mean being nice all the time. While being a people person is a great thing, the usual business rules still apply. A good manager balances the priorities of the people and the business, and can be nice and tough at the same time.

Easy to approach

While you may love to work with people, the people around you should also love to work with you, and a measure of that is the number of people who feel comfortable coming up and talking to you. A non-threatening, if not friendly, demeanor would certainly help. But even more important is the rest of the interaction that will follow. Do people come to you for problem solving and leave with more problems to solve?
Do they come to you to share an overload of work and leave with more work to do? Is there a fair give and take in your interactions with people?

Myth: I'm easy to approach, I have an open door policy

Approachability is not to be confused with accessibility. Accessibility is a measure of the number of channels through which, and the amount of time for which, you are accessible to others. Today, channels of accessibility are hardly an issue, given the multiple modes of contact, including Instant Messengers. Time availability will always remain an issue, and you'll have to consciously make time for people. Approachability isn't the same as availability or an open door policy. Your approachability is defined by the way you respond to people's attempts to get in touch with you. Do you respond quickly and positively, or do you brush them off for a few days? Do you have a friendly disposition towards people? Do you let people speak? Do you listen to what they have to say before responding? All of these define the degree of approachability you exhibit.

Farmer mentality: sow, nurture, grow, reap

There are thousands of types of jobs, but none of them is as involved, as complete, and perhaps as spiritual as farming. It requires hard work, investment, belief, knowledge, teamwork, patience, faith, ownership, and a sense of creativity. And of course, there is the element of risk, especially in India, where farmers still depend on the monsoon. Farmers go through a cycle of preparation, investment, nurturing, protection, seeing the crop grow, and then enjoying the benefits of all the effort. They go through this year after year, and while they make it better every cycle, they take the losses when decisions go bad. Nevertheless, the basic approach remains the same. Managers need to develop the traits of a farmer. You need a sense of preparation and investment, since that is the most basic, key part of the process, and then wait, while nurturing and supporting, for the benefits to roll in.
This needs to be done with every person, process, and project.

Myth: fast moving managers—in a tearing hurry

Some people believe that a hotshot manager is always juggling many tasks and pushes everyone to move faster, but that simply isn't true. Most exceptional managers have a farmer mentality. Farmers are always required to be patient. You can't push certain processes to be faster than their natural cycle. You can help and catalyze, but improvements are usually marginal and need to be evaluated for the long term. Too much of anything, even the catalyst, can yield bad results. Managers also need to be patient and respect the personal growth cycle of each individual and of different processes. Managers can help catalyze the process but need to allow the cycle to take its own course. Once a growth or improvement cycle is over, the next growth cycle can start.

Core values: honesty, integrity, truthfulness, trustworthiness, consideration for others, and more

There is no substitute for core values like honesty, integrity, and trustworthiness. These are very important for any employee in general, and are even more important for managers, as managers have a high impact on people and processes. There will be many challenges that come a manager's way and many decisions that managers need to make. Core values will be a guide in all of these. Many questions cannot be answered by looking at the rulebook, but are very easily answered by using the value system as a yardstick. There will be lots of opportunities for a manager to make quick gains by using a shortcut and possibly lowering the value standard. This would usually be impossible to sustain, and will come back to haunt you in the longer term. Consider this: Vijay comes to his manager's office and expresses the monetary problems he is facing. He is a good contributor and quite important to the current project.
Vijay mentions that he has an offer for about 30 percent more than what he makes right now, and although he likes the company, he'd like to resign. The manager's options are to relieve him in a month's time, or to promise him more than 30 percent in a few months' time when the annual salary revision is due. The manager knows that it may not really be possible to give Vijay 30 percent because the expected budget may not allow it; however, Vijay may stay until then, and the project will be past the critical stage. At the same time, the manager is not breaking any rules, as he is fine with giving Vijay a 30 percent raise if the budget is available, plus the manager can always say that upper management rejected the change. Even simpler situations, such as taking a day off for being sick when you are really not, using the official network and resources to watch adult material, or taking office stationery home, are all situations which call for basic values to be applied. Values are the foundation of good behavior, and nothing less is expected from a leader.

Not a myth: corporate greed

The recent financial crisis the world underwent is a grim reminder of corporate greed, which of course is a result of a few individuals propagating a culture of greed through the system. Poor governance and integrity standards have led to many a scandal with dire consequences. Satyam in India and Worldcom in the US cost thousands of jobs and the loss of credibility for the entire industry.

Tolerance for ambiguity and patience

We all know: the better the map, the easier it is to follow. Unfortunately, the working map in an organization is not always clear. Sometimes the destination is not clear, there are multiple ways to get there, and there are too many detours. You'd be lucky if the map does not change half way through. It would be great if the directions to follow were clear, but who is supposed to make them clear and easy to follow?
A manager needs to deal with this ambiguity—to find the best way given all the other factors. Ambiguity is the order of today's knowledge industry. A lot of things are fuzzy and need definition. It takes time to remove some of the fuzziness, and a manager needs to deal with it. It requires a tolerance for fuzziness and the patience to figure things out. Some people are predisposed to display patience, and others can learn to be patient. Patience defines the quality of your daily interactions, your responses, and, some people believe, your respect for others' opinions. A simple day-to-day necessity, like good communication, requires you to be patient. Patient people wait for others to complete their thoughts so they can take the time to respond well and with complete information.

Good communication skills—especially listening

Communication is the bread and butter of a manager. There is a lot of information which needs to be processed and communicated by a manager in all directions: to his directs and beyond, to his management chain, and also to many other parallel groups. Communication is NOT smooth talking. Many people confuse good communication with fast talking or smooth talking, where one person dominates a discussion and the other party. Good communication is not a love of talking. A rather quiet person, who can listen to others and respond with clarity, is a much more powerful communicator than somebody who simply loves to talk. Communication includes all forms of communication: the usual written communication such as e-mail, formal memos, letters, and so on; new age communication such as SMS, instant messaging, and so on; and verbal communication, via phone or video conference and face-to-face. Body language is also part of communication, although it's becoming less of a factor given that a majority of communication is not face-to-face anymore. Even people who sit half a floor away communicate via e-mail or IM.
The tone of your voice over the phone, or the tone of your instant message, plays an important part in the perception of the message. Good communication skills also include understanding your audience and communicating in such a way that the audience can understand and communicate back to you. As such, your communication style will change a little based on the audience it's intended for. Finally, the single biggest factor in good communication is listening. Unfortunately, the importance of listening very often gets lost, and a large population of people suffer from a lack of listening. Especially in India, people tend to cut into a discussion or start talking before the other person has finished, and perhaps get impatient to answer with the assumption that they know what the other person is talking about. Indian managers do need to work twice as hard to develop good listening skills.

Myth: quiet people can't be managers

Many people believe that managers are people who stand up and speak at every opportunity. It's not uncommon to see meetings where the manager takes all the talking time, with very little being said by anyone else. Remember the term talkative. The term instantly takes us back to middle school, when kids who talked too much in class were called talkative. It's often believed that managers need to be talkative: at every opportunity they get, they talk. It is indeed true that a large number of managers tend to talk too much, and unfortunately the problem grows over the years. Over time, people tend to avoid managers who ramble. You can be a quiet person, and as long as you don't shy away from speaking when it is required, quietness will be a strength. I have been fortunate to meet a lot of highly successful managers who are quiet by usual standards but have an impeccable record of delivery and team management.
Team building—hiring, retaining, developing good people, and nurturing team spirit

Another key competency for a manager is being able to build teams. Although, at a literal level, a team is made up of a set of people, in reality a team isn't really a team without the binding glue called team spirit. A manager is as good as the team he/she builds. A manager's capacity and ability to deliver is equal to the capacity and ability of the team. To start with, building teams requires good hiring skills. It requires:

- Position identification
- Defining skill requirements for the position
- Defining the process for identification and skills testing (most organizations will have a pre-defined process and supporting team to do this)
- Looking for fitment
- Deciding appropriate compensation
- Following the required organizational process for completing the hiring

Besides having a team, it is important to configure the team. For example, a team of 10 people may need to be balanced in terms of experience, youth and freshness, and a variety of technical skills. A team needs to have defined positions, and each team member should know what role and position he/she is supposed to play. Finally, a manager needs to create an environment that fosters team spirit and bonding, so that a set of people works as a team and not as multiple individuals. Once a team is in place, a manager needs to constantly nurture the team and also the individuals. Most people love to work in a team, but they are individuals too, and have unique needs and aspirations. This will lead to better retention, which is a definite success criterion for a manager in today's knowledge industry.
Why we need Design Patterns?

Packt
10 Nov 2016
16 min read
In this article by Praseed Pai and Shine Xavier, authors of the book .NET Design Patterns, we will try to understand the necessity of choosing a pattern-based approach to software development. We start with some principles of software development, which one might find useful while undertaking large projects. The working example in the article starts with a requirements specification and progresses towards a preliminary implementation. We will then try to iteratively improve the solution using patterns and idioms, and come up with a good design that supports a well-defined programming interface. In this process, we will learn about some software development principles one can adhere to, including the following:

- SOLID principles for OOP
- Three key uses of design patterns
- Arlow/Nuestadt archetype patterns
- Entity, value, and data transfer objects
- Leveraging the .NET Reflection API for plug-in architecture

(For more resources related to this topic, see here.)

Some principles of software development

Writing quality production code consistently is not easy without some foundational principles under your belt. The purpose of this section is to whet the developer's appetite, and towards the end, some references are given for detailed study. Detailed coverage of these principles warrants a separate book of its own. The authors have tried to assimilate the following key principles of software development, which would help one write quality code:

- KISS: Keep it simple, Stupid
- DRY: Don't repeat yourself
- YAGNI: You aren't gonna need it
- Low coupling: Minimize coupling between classes
- SOLID principles: Principles for better OOP

William of Ockham framed the maxim Keep it simple, Stupid (KISS). It is also called the law of parsimony. In programming terms, it can be translated as "writing code in a straightforward manner, focusing on a particular solution that solves the problem at hand".
This maxim is important because, most often, developers fall into the trap of writing code in a generic manner for unwarranted extensibility. Even though it initially looks attractive, things slowly go out of bounds. The accidental complexity introduced into the code base for catering to improbable scenarios often reduces readability and maintainability. The KISS principle can be applied to every human endeavor. Learn more about the KISS principle by consulting the Web.

Don't repeat yourself (DRY) is a maxim which most programmers often forget while implementing their domain logic. Most often, in a collaborative development scenario, code gets duplicated inadvertently due to lack of communication and proper design specifications. This bloats the code base, induces subtle bugs, and makes things really difficult to change. By following the DRY maxim at all stages of development, we can avoid additional effort and make the code consistent. The opposite of DRY is write everything twice (WET).

You aren't gonna need it (YAGNI) is a principle that complements the KISS axiom. It serves as a warning for people who try to write code in the most general manner, anticipating changes right from the word go. Too often, in practice, most of this code is never used, and it merely creates potential code smells.

While writing code, one should try to make sure that there are no hard-coded references to concrete classes. It is advisable to program to an interface as opposed to an implementation. This is a key principle which many patterns use to provide behavior acquisition at runtime. A dependency injection framework could be used to reduce coupling between classes.

SOLID principles are a set of guidelines for writing better object-oriented software. It is a mnemonic acronym that embodies the following five principles:

1. Single Responsibility Principle (SRP): A class should have only one responsibility. If it is doing more than one unrelated thing, we need to split the class.
2. Open/Closed Principle (OCP): A class should be open for extension, closed for modification.

3. Liskov Substitution Principle (LSP): Named after Barbara Liskov, a Turing Award laureate, who postulated that a sub-class (derived class) can substitute any super class (base class) reference without affecting the functionality. Even though it looks like stating the obvious, most implementations have quirks which violate this principle.

4. Interface Segregation Principle (ISP): It is more desirable to have multiple interfaces for a class (such classes can also be called components) than one uber interface that forces implementation of all methods (both relevant and non-relevant to the solution context).

5. Dependency Inversion (DI): This principle is very useful for framework design. In the case of frameworks, the client code is invoked by server code, as opposed to the usual process of the client invoking the server. The main principle here is that abstraction should not depend upon details; rather, details should depend upon abstraction. This is also called the Hollywood Principle ("Don't call us, we'll call you").

The authors consider the preceding five principles primarily as a verification mechanism. This will be demonstrated by verifying the ensuing case study implementations for violations of these principles. Karl Seguin has written an e-book titled Foundations of Programming – Building Better Software, which covers most of what has been outlined here. Read his book to gain an in-depth understanding of most of these topics. The SOLID principles are well covered on the Wikipedia page on the subject, which can be retrieved from https://en.wikipedia.org/wiki/SOLID_(object-oriented_design). Robert Martin's Agile Principles, Patterns, and Practices in C# is a definitive book for learning about SOLID, as Robert Martin himself is the creator of these principles, even though Michael Feathers coined the acronym.

Why are patterns required?
According to the authors, the three key advantages of pattern-oriented software development that stand out are as follows:

- A language/platform-agnostic way to communicate about software artifacts
- A tool for refactoring initiatives (targets for refactoring)
- Better API design

With the advent of the pattern movement, the software development community got a canonical language to communicate about software design, architecture, and implementation. Software development is a craft which has trade-offs attached to each strategy, and there are multiple ways to develop software. The various pattern catalogs brought some conceptual unification to this cacophony in software development. Most developers around the world today who are worth their salt can understand and speak this language. We believe you will be able to do the same by the end of the article. Fancy yourself stating the following about your recent implementation:

For our tax computation example, we have used the command pattern to handle the computation logic. The commands (handlers) are configured using an XML file, and a factory method takes care of the instantiation of classes on the fly using lazy loading. We cache the commands, and avoid instantiation of more objects by imposing singleton constraints on the invocation. We support the prototype pattern, where command objects can be cloned. The command objects have a base implementation, where concrete command objects use the template method pattern to override methods where necessary. The command objects are implemented using the design-by-contract idiom. The whole mechanism is encapsulated using a Façade class, which acts as an API layer for the application logic. The application logic uses entity objects (reference objects) to store the taxable entities; attributes like tax parameters are stored as value objects. We use a data transfer object (DTO) to transfer the data from the application layer to the computational layer.
The Arlow/Nuestadt-based archetype pattern is the unit for structuring the tax computation logic. For some developers, the preceding language/platform-independent description of the software being developed is enough to understand the approach taken. This will boost developer productivity (during all phases of the SDLC, including development, maintenance, and support), as the developers will be able to form a good mental model of the code base. Without pattern catalogs, such succinct descriptions of the design or implementation would have been impossible.

In an Agile software development scenario, we develop software in an iterative fashion. Once we reach a certain maturity in a module, developers refactor their code. While refactoring a module, patterns do help in organizing the logic. The case study given next will help you to understand the rationale behind "patterns as refactoring targets". APIs based on well-defined patterns are easy to use and impose less cognitive load on programmers. The success of the ASP.NET MVC framework, NHibernate, and the APIs for writing HTTP modules and handlers in the ASP.NET pipeline are a few testimonies to this.

Personal income tax computation - A case study

Rather than explaining the advantages of patterns, the following example will help us to see things in action. Computation of annual income tax is a well-known problem domain across the globe. We have chosen an application domain which is well known, so that we can focus on the software development issues. The application should receive inputs regarding the demographic profile (UID, Name, Age, Sex, Location) of a citizen and the income details (Basic, DA, HRA, CESS, Deductions) to compute his or her tax liability. The system should have discriminants based on the demographic profile, and have separate logic for senior citizens, juveniles, disabled people, old females, and others.
By discriminant, we mean that demographic parameters like age, sex, and location should determine the category to which a person belongs, and category-specific computation should be applied for that individual. As a first iteration, we will implement logic for the senior citizen and ordinary citizen categories. After a preliminary discussion, our developer created a prototype screen, as shown in the following image:

Archetypes and business archetype pattern

The legendary Swiss psychologist Carl Gustav Jung created the concept of archetypes to explain fundamental entities which arise from a common repository of human experiences. The concept of archetypes percolated to the software industry from psychology. The Arlow/Nuestadt patterns describe business archetype patterns like Party, Customer Call, Product, Money, Unit, Inventory, and so on. An example is the Apache Maven archetype, which helps us to generate projects of different natures like J2EE apps, Eclipse plugins, OSGI projects, and so on. The Microsoft patterns and practices group describes archetypes for targeting builds like Web applications, rich client applications, mobile applications, and services applications. Various domain-specific archetypes can exist in respective contexts as organizing and structuring mechanisms. In our case, we will define some archetypes which are common in the taxation domain. Some of the key archetypes in this domain are:

1. SeniorCitizenFemale: Tax payers who are female, and above the age of 60 years
2. SeniorCitizen: Tax payers who are male, and above the age of 60 years
3. OrdinaryCitizen: Tax payers who are male/female, and above 18 years of age
4. DisabledCitizen: Tax payers who have any disability
5. MilitaryPersonnel: Tax payers who are military personnel
6. Juveniles: Tax payers whose age is less than 18 years

We will use demographic parameters as a discriminant to find the archetype which corresponds to the entity.
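A minimal sketch of how such discriminant-based resolution might be coded is shown below. The method name, the thresholds, and the precedence of the checks are illustrative assumptions, not the authors' implementation:

```csharp
using System;

public static class ArchetypeResolver
{
    // Illustrative mapping from demographic discriminants to an archetype
    // name. Check order and age thresholds are assumptions for this sketch.
    public static string Resolve(int age, char sex, bool disabled, bool military)
    {
        if (age < 18) return "Juveniles";
        if (disabled) return "DisabledCitizen";
        if (military) return "MilitaryPersonnel";
        if (age > 60) return sex == 'F' ? "SeniorCitizenFemale" : "SeniorCitizen";
        return "OrdinaryCitizen";
    }

    public static void Main()
    {
        // A 65-year-old female resolves to the SeniorCitizenFemale archetype.
        Console.WriteLine(Resolve(65, 'F', disabled: false, military: false));
        // prints: SeniorCitizenFemale
    }
}
```

The returned string can then be used to delegate to the archetype-specific computation, which is exactly what the command dispatcher later in the article does.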
The whole idea of introducing archetypes is to organize the tax computation logic around them. Once we are able to resolve the archetypes, it is easy to locate and delegate the computations corresponding to them.

Entity, value, and data transfer objects

We are going to create a class which represents a citizen. Since a citizen needs to be uniquely identified, we are going to create an entity object, which is also called a reference object (from the DDD catalog). The universal identifier (UID) of an entity object is the handle by which an application refers to it. Entity objects are not identified by their attributes, as there can be two people with the same name; the ID uniquely identifies an entity object. The definition of an entity object is given as follows:

```csharp
public class TaxableEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
    public char Sex { get; set; }
    public string Location { get; set; }
    public TaxParamVO taxparams { get; set; }
}
```

In the preceding class definition, Id uniquely identifies the entity object. TaxParams is a value object (from the DDD catalog) associated with the entity object. Value objects do not have a conceptual identity; they describe attributes of things (entities). The definition of TaxParams is given as follows:

```csharp
public class TaxParamVO
{
    public double Basic { get; set; }
    public double DA { get; set; }
    public double HRA { get; set; }
    public double Allowance { get; set; }
    public double Deductions { get; set; }
    public double Cess { get; set; }
    public double TaxLiability { get; set; }
    public bool Computed { get; set; }
}
```

Ever since Smalltalk, Model-View-Controller (MVC) has been the most dominant paradigm for structuring applications. The application is split into a model layer (which mostly deals with data), a view layer (which acts as a display layer), and a controller (to mediate between the two). In the Web development scenario, they are physically partitioned across machines.
To transfer data between layers, the J2EE pattern catalog identified the Data Transfer Object (DTO). The DTO object is defined as follows:

public class TaxDTO
{
    public int id { get; set; }
    public TaxParamVO taxparams { get; set; }
}

If the layering exists within the same process, we can transfer these objects as-is. If the layers are partitioned across processes or systems, we can use XML or JSON serialization to transfer objects between them.

A computation engine

We need to separate UI processing, input validation, and computation to create a solution which can be extended to handle additional requirements. The computation engine will execute different logic depending upon the command received. The GoF command pattern is leveraged for executing the logic based on the command received. The command pattern consists of four constituents:

- Command object
- Parameters
- Command Dispatcher
- Client

The command object's interface has an Execute method. The parameters to the command object are passed through a bag. The client invokes the command object by passing the parameters through a bag to be consumed by the Command Dispatcher. The parameters are passed to the command object through the following data structure:

public class COMPUTATION_CONTEXT
{
    private Dictionary<string, object> symbols = new Dictionary<string, object>();

    public void Put(string k, object value) { symbols.Add(k, value); }

    public object Get(string k) { return symbols[k]; }
}

The ComputationCommand interface, which all the command objects implement, has only one Execute method, shown next. The Execute method takes a bag as a parameter; the COMPUTATION_CONTEXT data structure acts as the bag here.

public interface ComputationCommand
{
    bool Execute(COMPUTATION_CONTEXT ctx);
}

Since we have already implemented a command interface and a bag to transfer the parameters, it is time to implement a command object. For the sake of simplicity, we will implement two commands in which we hardcode the tax liability.
public class SeniorCitizenCommand : ComputationCommand
{
    public bool Execute(COMPUTATION_CONTEXT ctx)
    {
        TaxDTO td = (TaxDTO)ctx.Get("tax_cargo");
        //---- Instead of computation, we are assigning
        //---- a constant tax for each archetype
        td.taxparams.TaxLiability = 1000;
        td.taxparams.Computed = true;
        return true;
    }
}

public class OrdinaryCitizenCommand : ComputationCommand
{
    public bool Execute(COMPUTATION_CONTEXT ctx)
    {
        TaxDTO td = (TaxDTO)ctx.Get("tax_cargo");
        //---- Instead of computation, we are assigning
        //---- a constant tax for each archetype
        td.taxparams.TaxLiability = 1500;
        td.taxparams.Computed = true;
        return true;
    }
}

The commands will be invoked by a CommandDispatcher object, which takes an archetype string and a COMPUTATION_CONTEXT object. The CommandDispatcher acts as an API layer for the application.

class CommandDispatcher
{
    public static bool Dispatch(string archetype, COMPUTATION_CONTEXT ctx)
    {
        if (archetype == "SeniorCitizen")
        {
            SeniorCitizenCommand cmd = new SeniorCitizenCommand();
            return cmd.Execute(ctx);
        }
        else if (archetype == "OrdinaryCitizen")
        {
            OrdinaryCitizenCommand cmd = new OrdinaryCitizenCommand();
            return cmd.Execute(ctx);
        }
        else
        {
            return false;
        }
    }
}

The application to engine communication

The data from the application UI, be it web or desktop, has to flow to the computation engine.
The following ViewHandler routine shows how data retrieved from the application UI is passed to the engine, via the CommandDispatcher, by a client:

public static void ViewHandler(TaxCalcForm tf)
{
    TaxableEntity te = GetEntityFromUI(tf);
    if (te == null)
    {
        ShowError();
        return;
    }
    string archetype = ComputeArchetype(te);
    COMPUTATION_CONTEXT ctx = new COMPUTATION_CONTEXT();
    TaxDTO td = new TaxDTO { id = te.Id, taxparams = te.taxparams };
    ctx.Put("tax_cargo", td);
    bool rs = CommandDispatcher.Dispatch(archetype, ctx);
    if (rs)
    {
        TaxDTO temp = (TaxDTO)ctx.Get("tax_cargo");
        tf.Liabilitytxt.Text = Convert.ToString(temp.taxparams.TaxLiability);
        tf.Refresh();
    }
}

At this point, imagine that a change in requirements has been received from the stakeholders: we now need to support tax computation for new categories. Initially, we had different computations for the senior citizen and ordinary citizen categories; now we need to add new archetypes. At the same time, to keep the software extensible (loosely coupled) and maintainable, it would be ideal to support new archetypes in a configurable manner, as opposed to recompiling the application for every new archetype owing to concrete references. The CommandDispatcher object does not scale well to handle additional archetypes: we need to change the assembly whenever a new archetype is included, as the tax computation logic varies for each archetype. We need to create a pluggable architecture to add or remove archetypes at will.

The plugin system to make the system extensible

Writing system logic without impacting the application warrants a mechanism for loading a class on the fly. Luckily, the .NET Reflection API provides a mechanism to load a class at runtime and invoke methods within it. A developer worth his salt should learn the Reflection API to write systems which change dynamically.
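Before moving to the Reflection-based solution, note that the pluggable idea itself is language-agnostic. A minimal sketch in JavaScript follows; the command names mirror the article's C# classes, but the registry wiring is illustrative, not the article's actual implementation:

```javascript
// Registry-based dispatcher sketch: commands are looked up by archetype
// name instead of being hardwired into an if/else chain, so supporting
// a new archetype means registering an entry, not editing the dispatcher.
const commandRegistry = new Map();

function registerCommand(archetype, executeFn) {
  commandRegistry.set(archetype, executeFn);
}

function dispatch(archetype, taxDto) {
  const execute = commandRegistry.get(archetype);
  if (!execute) return false; // unknown archetype, mirroring the C# fallback
  return execute(taxDto);
}

// The two placeholder commands from the article, re-expressed as entries.
registerCommand("SeniorCitizen", function (dto) {
  dto.taxparams.TaxLiability = 1000;
  dto.taxparams.Computed = true;
  return true;
});
registerCommand("OrdinaryCitizen", function (dto) {
  dto.taxparams.TaxLiability = 1500;
  dto.taxparams.Computed = true;
  return true;
});
```

The C# version achieves the same decoupling by populating the registry from an external configuration file via Reflection, which is what the next section builds.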
In fact, most technologies like ASP.NET, Entity Framework, .NET Remoting, and WCF work because of the availability of the Reflection API in the .NET stack. Henceforth, we will be using an XML configuration file to specify our tax computation logic. A sample XML file is given next:

<?xml version="1.0"?>
<plugins>
  <plugin archetype="OrdinaryCitizen" command="TaxEngine.OrdinaryCitizenCommand"/>
  <plugin archetype="SeniorCitizen" command="TaxEngine.SeniorCitizenCommand"/>
</plugins>

The contents of the XML file can be read very easily using LINQ to XML. We will generate a Dictionary object with the following code snippet:

private Dictionary<string, string> LoadData(string xmlfile)
{
    return XDocument.Load(xmlfile)
        .Descendants("plugins")
        .Descendants("plugin")
        .ToDictionary(p => p.Attribute("archetype").Value,
                      p => p.Attribute("command").Value);
}

Summary

In this article, we covered quite a lot of ground in understanding why pattern-oriented software development is a good way to develop modern software. We started by citing some key principles, and progressed to demonstrate their applicability by iteratively skinning an application which is extensible and resilient to change.
The seven deadly sins of web design

Guest Contributor
13 Mar 2019
7 min read
Just 30 days before the debut of "Captain Marvel," the latest cinematic offering by the successful and prolific Marvel Studios, a delightful and nostalgia-filled website was unveiled to promote the movie. Since the story of "Captain Marvel" is set in the 1990s, the brilliant minds in the marketing department of Marvel Studios decided to design a website with the right look and feel, which in this case meant using FrontPage and hosting on Angelfire. The "Captain Marvel" promo website is filled with the typography, iconography, glitter, and crudely animated GIFs you would expect from a 1990s creation, including a guestbook, hidden easter eggs, flaming borders, a hit counter, and even headers made with Microsoft WordArt.

(Image courtesy of Marvel)

The site is delightful not just for the dead-on nostalgia trip it provides to visitors, but also because it is very well developed. This is a site with a lot to explore, and it is evident that the website developers met client demands while at the same time thinking about users. This site may look and feel like it was made during the GeoCities era, but it does not make any of the following seven mistakes:

Sin #1: Non-responsiveness

In 2019, it is simply inconceivable to think of a web development firm that neglects to make a responsive site. Since 2016, internet traffic flowing through mobile devices has been higher than the traffic originating from desktops and laptops. Current rates are about 53 percent smartphones and tablets versus 47 percent desktops, laptops, kiosks, and smart TVs. Failure to develop responsive websites means potentially alienating more than 50 percent of prospective visitors. As for the "Captain Marvel" website, it is amazingly responsive when you consider that internet users in the 1990s barely dreamed of the day when they would be able to access the web from handheld devices (mobile phones were yet to be mass distributed back then).
Sin #2: Way too much jargon

(Image courtesy of the Botanical Linguist)

Not all website developers have a good sense of readability, and this often shows up in completed projects that leave visitors struggling to comprehend them. We're talking about jargon. There's a lot of it online, not only in the usual places like the privacy policy and terms of service sections but sometimes in content too. Regardless of how jargon creeps onto your website, it should be rooted out. The "Captain Marvel" website features legal notices written by The Walt Disney Company, and they are very reader-friendly with minimal jargon. The best way to handle jargon is to avoid it as much as possible unless the business developer has good reasons to include it.

Sin #3: A noticeable lack of content

No content means no message, and this is the reason 46 percent of visitors who land on B2B websites end up leaving without further exploration or interaction. Quality content that is relevant to the intention of a website is crucial in terms of establishing credibility, and this goes beyond B2B websites. In the case of "Captain Marvel," the amount of content is reduced to match the retro sensibility, but there are enough photos, film trailers, character bios, and games to keep visitors entertained. Modern website development firms that provide full-service solutions can either provide or advise clients on the content they need to get started. Furthermore, they can also offer lessons on how to operate content management systems.

Sin #4: Making essential information hard to find

There was a time when the "mystery meat navigation" issue of website development was thought to have been eradicated through the judicious application of recommended practices, but then mobile apps came around.
Even technology giant Google fell victim to mystery meat navigation with its 2016 release of Material Design, which introduced bottom navigation bars intended to offer a more clarifying alternative to hamburger menus. Unless there is a clever purpose for prompting visitors to click or tap on a button, link, or page element that does not explain the next steps, mystery meat navigation should be avoided, particularly when it comes to essential information. When the 1990s "Captain Marvel" page loads, visitors can click or tap on labeled links to get information about the film, enjoy multimedia content, play games, interact with the guestbook, or get tickets. There is a mysterious old woman who pops up every now and then from the edges of the screen, but the reason behind this mysterious element is explained in the information section.

Sin #5: Website loads too slow

(Image courtesy of Horton Marketing Solutions)

There is an anachronism related to the "Captain Marvel" website that users who actually used Netscape in the 1990s will notice: all pages load very fast. This is one retro aspect that Marvel Studios decided not to include on this site, and it makes perfect sense. For a fast-loading site, a web design rule of thumb is to simplify, and this responsibility lies squarely with the developer. It stands to reason that the more "stuff" you have on a page (images, forms, videos, widgets, shiny things), the longer it takes the server to send over the site files and the longer it takes the browser to render them. Here are a few design best practices to keep in mind:

1. Make the site light: get rid of non-essential elements, especially if they are bandwidth-sucking images or video.
2. Compress your pages: it's easy with Gzip.
3. Split long pages into several shorter ones.
4. Write clean code that doesn't rely on external sources.
5. Optimize images.

For more web design tips that help your site load in the sub-three-second range, as Google expects in 2019, check out our article on current design trends. Once you have design issues under control, investigate your web host. They aren't all created equal. Cheap, entry-level shared packages are notoriously slow and unpredictable, especially as your traffic increases. But even beyond that, the reality is that some companies spend money buying better, faster servers and don't overload them with too many clients. Some do. Recent testing from review site HostingCanada.org checked load times across the leading providers and found variances from a 'meh' 2,850 ms all the way down to a speedy 226 ms. With pricing amongst credible competitors roughly equal, web developers should know which hosts are the fastest and point clients in that direction.

Sin #6: Outdated information

Functional and accurate information will always triumph over form. The "Captain Marvel" website is garish to look at by 2019 standards, but all the information is current. The film's theater release date is clearly displayed, and should something happen that would require this date to change, you can be sure that Marvel Studios will fire up FrontPage to promptly make the adjustment.

Sin #7: No clear call to action

Every website should compel visitors to do something. Even if the purpose is to provide information, the call to action (CTA) should encourage visitors to remember it and return for updates. The CTA should be as clear as the navigation elements; otherwise, the purpose of the visit is lost. Creating enticements is acceptable, but the CTA message should be explained nonetheless. In the case of "Captain Marvel," visitors can click on the "Get Tickets" link to be taken to a Fandango.com page with geolocation redirection for their region.
The Bottom Line

In the end, the seven mistakes listed herein are easy to avoid. Whenever developers run into clients whose instructions may result in one of these mistakes, proper explanations should be given.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation as well as an active GitHub contributor.
Adding Real-time Functionality Using Socket.io

Packt
22 Sep 2014
18 min read
In this article, Amos Q. Haviv, the author of MEAN Web Development, describes how Socket.io enables Node.js developers to support real-time communication using WebSockets in modern browsers and legacy fallback protocols in older browsers.

Introducing WebSockets

Modern web applications such as Facebook, Twitter, or Gmail are incorporating real-time capabilities, which enable the application to continuously present the user with recently updated information. Unlike traditional applications, in real-time applications the common roles of browser and server can be reversed, since the server needs to update the browser with new data regardless of the browser's request state. This means that unlike the common HTTP behavior, the server won't wait for the browser's requests; instead, it will send new data to the browser whenever this data becomes available. This reverse approach is often called Comet, a term coined by a web developer named Alex Russell back in 2006 (the term was a word play on the AJAX term; both Comet and AJAX are common household cleaners in the US). In the past, there were several ways to implement Comet functionality using the HTTP protocol. The first and easiest way is XHR polling. In XHR polling, the browser makes periodic requests to the server. The server then returns an empty response unless it has new data to send back. Upon a new event, the server will return the new event data to the next polling request. While this works quite well for most browsers, this method has two problems. The most obvious one is that it generates a large number of requests that hit the server for no particular reason, since many requests return empty. The second problem is that the update time depends on the request period. This means that new data will only get pushed to the browser on the next request, causing delays in updating the client state.
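The XHR polling cycle just described can be reduced to a small sketch. Here, fetchFn stands in for the browser's XHR/fetch call and is an assumption of this example, not part of any real API:

```javascript
// One polling round: ask the server for updates; a null payload models
// the empty response the server returns when it has nothing new.
async function pollOnce(fetchFn, onData) {
  const payload = await fetchFn();
  if (payload !== null) onData(payload);
  return payload;
}

// The wasteful part of XHR polling: a request fires every intervalMs
// whether or not the server has anything to report, and fresh data can
// sit on the server for up to intervalMs before the next round picks it up.
function startPolling(fetchFn, onData, intervalMs) {
  return setInterval(function () { pollOnce(fetchFn, onData); }, intervalMs);
}
```

Both drawbacks the article names are visible here: the interval drives request volume regardless of activity, and it also bounds how stale the client can get.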
To solve these issues, a better approach was introduced: XHR long polling. In XHR long polling, the browser makes an XHR request to the server, but a response is not sent back unless the server has new data. Upon an event, the server responds with the event data and the browser makes a new long polling request. This cycle enables better management of requests, since there is only a single request per session. Furthermore, the server can update the browser immediately with new information, without having to wait for the browser's next request. Because of its stability and usability, XHR long polling became the standard approach for real-time applications and was implemented in various ways, including Forever iFrame, multipart XHR, JSONP long polling using script tags (for cross-domain, real-time support), and the common long-living XHR. However, all these approaches were actually hacks, using the HTTP and XHR protocols in a way they were not meant to be used. With the rapid development of modern browsers and the increased adoption of the new HTML5 specifications, a new protocol emerged for implementing real-time communication: the full-duplex WebSockets protocol. In browsers that support the WebSockets protocol, the initial connection between the server and browser is made over HTTP and is called an HTTP handshake. Once the initial connection is made, the browser and server open a single ongoing communication channel over a TCP socket. Once the socket connection is established, it enables bidirectional communication between the browser and server. This enables both parties to send and retrieve messages over a single communication channel. This also helps to lower server load, decrease message latency, and unify PUSH communication using a standalone connection. However, WebSockets still suffer from two major problems. First and foremost is browser compatibility.
The WebSockets specification is fairly new, so older browsers don't support it, and though most modern browsers now implement the protocol, a large group of users are still using these older browsers. The second problem is HTTP proxies, firewalls, and hosting providers. Since WebSockets use a different communication protocol than HTTP, a lot of these intermediaries don't support it yet and block any socket communication. As it has always been with the web, developers are left with a fragmentation problem, which can only be solved using an abstraction library that optimizes usability by switching between protocols according to the available resources. Fortunately, a popular library called Socket.io was already developed for this purpose, and it is freely available to the Node.js developer community.

Introducing Socket.io

Created in 2010 by JavaScript developer Guillermo Rauch, Socket.io aimed to abstract Node.js real-time application development. Since then, it has evolved dramatically, going through nine major versions before being broken in its latest version into two different modules: Engine.io and Socket.io. Previous versions of Socket.io were criticized for being unstable, since they first tried to establish the most advanced connection mechanisms and then fell back to more primitive protocols. This caused serious issues with using Socket.io in production environments and posed a threat to the adoption of Socket.io as a real-time library. To solve this, the Socket.io team redesigned it and wrapped the core functionality in a base module called Engine.io. The idea behind Engine.io was to create a more stable real-time module, which first opens long-polling XHR communication and then tries to upgrade the connection to a WebSockets channel. The new version of Socket.io uses the Engine.io module and provides the developer with various features such as events, rooms, and automatic connection recovery, which you would otherwise implement by yourself.
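The events feature just mentioned follows the classic emitter pattern. As a rough, generic sketch of what "implementing it yourself" would involve (this is not Socket.io's actual code, which also layers transports, namespaces, and rooms on top):

```javascript
// Minimal on()/emit() mechanics: handlers are stored per event name and
// invoked in registration order when that event fires.
class TinyEmitter {
  constructor() {
    this.handlers = {};
  }
  on(event, fn) {
    (this.handlers[event] = this.handlers[event] || []).push(fn);
  }
  emit(event, data) {
    (this.handlers[event] || []).forEach(function (fn) { fn(data); });
  }
}
```

Socket.io exposes this same on()/emit() surface on both its server and client objects, which is why the API in the following sections looks identical on both sides of the connection.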
In this article's examples, we will use the new Socket.io 1.0, which is the first version to use the Engine.io module. Older versions of Socket.io prior to version 1.0 do not use the new Engine.io module and are therefore much less stable in production environments. When you include the Socket.io module, it provides you with two objects: a socket server object that is responsible for the server functionality and a socket client object that handles the browser's functionality. We'll begin by examining the server object.

The Socket.io server object

The Socket.io server object is where it all begins. You start by requiring the Socket.io module, and then use it to create a new Socket.io server instance that will interact with socket clients. The server object supports both a standalone implementation and the ability to use it in conjunction with the Express framework. The server instance then exposes a set of methods that allow you to manage the Socket.io server operations. Once the server object is initialized, it will also be responsible for serving the socket client JavaScript file for the browser. A simple implementation of the standalone Socket.io server will look as follows:

var io = require('socket.io')();
io.on('connection', function(socket){ /* ... */ });
io.listen(3000);

This will open Socket.io over the 3000 port and serve the socket client file at the URL http://localhost:3000/socket.io/socket.io.js. Implementing the Socket.io server in conjunction with an Express application will be a bit different:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.on('connection', function(socket){ /* ... */ });
server.listen(3000);

This time, you first use the http module of Node.js to create a server and wrap the Express application. The server object is then passed to the Socket.io module and serves both the Express application and the Socket.io server.
Once the server is running, it will be available for socket clients to connect. A client trying to establish a connection with the Socket.io server will start by initiating the handshaking process.

Socket.io handshaking

When a client wants to connect to the Socket.io server, it will first send a handshake HTTP request. The server will then analyze the request to gather the necessary information for ongoing communication. It will then look for configuration middleware that is registered with the server and execute it before firing the connection event. When the client is successfully connected to the server, the connection event listener is executed, exposing a new socket instance. Once the handshaking process is over, the client is connected to the server and all communication with it is handled through the socket instance object. For example, handling a client's disconnection event will be as follows:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.on('connection', function(socket){
  socket.on('disconnect', function() {
    console.log('user has disconnected');
  });
});
server.listen(3000);

Notice how the socket.on() method adds an event handler to the disconnection event. Although the disconnection event is a predefined event, this approach works the same for custom events as well, as you will see in the following sections. While the handshake mechanism is fully automatic, Socket.io does provide you with a way to intercept the handshake process using configuration middleware.

The Socket.io configuration middleware

Although the Socket.io configuration middleware existed in previous versions, in the new version it is even simpler and allows you to manipulate socket communication before the handshake actually occurs.
To create a configuration middleware, you will need to use the server's use() method, which is very similar to the Express application's use() method:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.use(function(socket, next) {
  /* ... */
  next(null, true);
});
io.on('connection', function(socket){
  socket.on('disconnect', function() {
    console.log('user has disconnected');
  });
});
server.listen(3000);

As you can see, the io.use() method callback accepts two arguments: the socket object and a next callback. The socket object is the same socket object that will be used for the connection, and it holds some connection properties. One important property is the socket.request property, which represents the handshake HTTP request. In the following sections, you will use the handshake request to incorporate the Passport session with the Socket.io connection. The next argument is a callback method that accepts two arguments: an error object and a Boolean value. The next callback tells Socket.io whether or not to proceed with the handshake process, so if you pass an error object or a false value to the next method, Socket.io will not initiate the socket connection. Now that you have a basic understanding of how handshaking works, it is time to discuss the Socket.io client object.

The Socket.io client object

The Socket.io client object is responsible for the implementation of the browser's socket communication with the Socket.io server. You start by including the Socket.io client JavaScript file, which is served by the Socket.io server. The Socket.io JavaScript file exposes an io() method that connects to the Socket.io server and creates the client socket object. A simple implementation of the socket client will be as follows:

<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io();
socket.on('connect', function() {
  /* ... */
});
</script>

Notice the default URL for the Socket.io client object.
Although this can be altered, you can usually leave it like this and just include the file from the default Socket.io path. Another thing you should notice is that the io() method will automatically try to connect to the default base path when executed with no arguments; however, you can also pass a different server URL as an argument. As you can see, the socket client is much easier to implement, so we can move on to discuss how Socket.io handles real-time communication using events.

Socket.io events

To handle the communication between the client and the server, Socket.io uses a structure that mimics the WebSockets protocol and fires event messages across the server and client objects. There are two types of events: system events, which indicate the socket connection status, and custom events, which you'll use to implement your business logic.

The system events on the socket server are as follows:

- io.on('connection', ...): This is emitted when a new socket is connected
- socket.on('message', ...): This is emitted when a message is sent using the socket.send() method
- socket.on('disconnect', ...): This is emitted when the socket is disconnected

The system events on the client are as follows:

- socket.io.on('open', ...): This is emitted when the socket client opens a connection with the server
- socket.io.on('connect', ...): This is emitted when the socket client is connected to the server
- socket.io.on('connect_timeout', ...): This is emitted when the socket client connection with the server times out
- socket.io.on('connect_error', ...): This is emitted when the socket client fails to connect with the server
- socket.io.on('reconnect_attempt', ...): This is emitted when the socket client tries to reconnect with the server
- socket.io.on('reconnect', ...): This is emitted when the socket client is reconnected to the server
- socket.io.on('reconnect_error', ...): This is emitted when the socket client fails to reconnect with the server
- socket.io.on('reconnect_failed', ...):
This is emitted when the socket client fails to reconnect with the server
- socket.io.on('close', ...): This is emitted when the socket client closes the connection with the server

Handling events

While system events help us with connection management, the real magic of Socket.io relies on using custom events. In order to do so, Socket.io exposes two methods, both on the client and server objects. The first method is the on() method, which binds event handlers with events, and the second method is the emit() method, which is used to fire events between the server and client objects. An implementation of the on() method on the socket server is very simple:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.on('connection', function(socket){
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
});
server.listen(3000);

In the preceding code, you bound an event listener to the customEvent event. The event handler is called when the socket client object emits the customEvent event. Notice how the event handler accepts the customEventData argument that is passed to it from the socket client object. An implementation of the on() method on the socket client is also straightforward:

<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io();
socket.on('customEvent', function(customEventData) {
  /* ... */
});
</script>

This time the event handler is called when the socket server emits the customEvent event that sends customEventData to the socket client event handler. Once you set your event handlers, you can use the emit() method to send events from the socket server to the socket client and vice versa.

Emitting events

On the socket server, the emit() method is used to send events to a single socket client or a group of connected socket clients.
The emit() method can be called from the connected socket object, which will send the event to a single socket client, as follows:

io.on('connection', function(socket){
  socket.emit('customEvent', customEventData);
});

The emit() method can also be called from the io object, which will send the event to all connected socket clients, as follows:

io.on('connection', function(socket){
  io.emit('customEvent', customEventData);
});

Another option is to send the event to all connected socket clients except the sender using the broadcast property, as shown in the following lines of code:

io.on('connection', function(socket){
  socket.broadcast.emit('customEvent', customEventData);
});

On the socket client, things are much simpler. Since the socket client is only connected to the socket server, the emit() method will only send the event to the socket server:

var socket = io();
socket.emit('customEvent', customEventData);

Although these methods allow you to switch between personal and global events, they still lack the ability to send events to a group of connected socket clients. Socket.io offers two options to group sockets together: namespaces and rooms.

Socket.io namespaces

In order to easily control socket management, Socket.io allows developers to split socket connections according to their purpose using namespaces. So instead of creating different socket servers for different connections, you can just use the same server to create different connection endpoints. This means that socket communication can be divided into groups, which will then be handled separately.

Socket.io server namespaces

To create a socket server namespace, you will need to use the socket server's of() method, which returns a socket namespace.
Once you retain the socket namespace, you can use it the same way you use the socket server object:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);

io.of('/someNamespace').on('connection', function(socket) {
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
});

io.of('/someOtherNamespace').on('connection', function(socket) {
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
});

server.listen(3000);

In fact, when you use the io object, Socket.io actually uses a default empty namespace as follows:

io.on('connection', function(socket) {
  /* ... */
});

The preceding lines of code are actually equivalent to this:

io.of('').on('connection', function(socket) {
  /* ... */
});

Socket.io client namespaces

On the socket client, the implementation is a little different:

<script src="/socket.io/socket.io.js"></script>
<script>
  var someSocket = io('/someNamespace');
  someSocket.on('customEvent', function(customEventData) {
    /* ... */
  });

  var someOtherSocket = io('/someOtherNamespace');
  someOtherSocket.on('customEvent', function(customEventData) {
    /* ... */
  });
</script>

As you can see, you can use multiple namespaces in the same application without much effort. However, once sockets are connected to different namespaces, you will not be able to send an event to all of these namespaces at once. This means that namespaces are not well suited to more dynamic grouping logic. For this purpose, Socket.io offers a different feature called rooms.

Socket.io rooms

Socket.io rooms allow you to partition connected sockets into different groups in a dynamic way. Connected sockets can join and leave rooms, and Socket.io provides you with a clean interface to manage rooms and emit events to the subset of sockets in a room. The rooms functionality is handled solely on the socket server but can easily be exposed to the socket client.
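The dynamic grouping that rooms provide can be pictured as named sets of socket identifiers. The sketch below is a plain-JavaScript illustration of the join/leave/emit-to-room idea only; it is not Socket.io's actual internal implementation, which the library manages for you:

```javascript
// Toy model of room membership: room name -> Set of member socket IDs.
// Illustrative only; Socket.io handles this bookkeeping itself.
var rooms = {};

function join(socketId, roomName) {
  (rooms[roomName] = rooms[roomName] || new Set()).add(socketId);
}

function leave(socketId, roomName) {
  if (rooms[roomName]) {
    rooms[roomName].delete(socketId);
  }
}

// Model of io.in(roomName).emit(...): deliver to every current member.
function emitToRoom(roomName, deliver) {
  (rooms[roomName] || new Set()).forEach(deliver);
}

join('socketA', 'someRoom');
join('socketB', 'someRoom');
leave('socketB', 'someRoom');

var delivered = [];
emitToRoom('someRoom', function (socketId) {
  delivered.push(socketId);
});

console.log(delivered); // [ 'socketA' ]
```

Because membership is just set insertion and removal, sockets can join and leave rooms at any point in their lifetime, which is exactly the dynamic grouping that namespaces cannot offer.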
Joining and leaving rooms

Joining a room is handled using the socket's join() method, while leaving a room is handled using the leave() method. So, a simple subscription mechanism can be implemented as follows:

io.on('connection', function(socket) {
  socket.on('join', function(roomData) {
    socket.join(roomData.roomName);
  });

  socket.on('leave', function(roomData) {
    socket.leave(roomData.roomName);
  });
});

Notice that the join() and leave() methods both take the room name as the first argument.

Emitting events to rooms

To emit events to all the sockets in a room, you will need to use the in() method. Emitting an event to all socket clients who joined a room is quite simple and can be achieved with the help of the following code snippet:

io.on('connection', function(socket) {
  io.in('someRoom').emit('customEvent', customEventData);
});

Another option is to send the event to all connected socket clients in a room except the sender by using the broadcast property and the to() method:

io.on('connection', function(socket) {
  socket.broadcast.to('someRoom').emit('customEvent', customEventData);
});

This pretty much covers the simple yet powerful room functionality of Socket.io. In the next section, you will learn how to implement Socket.io in your MEAN application and, more importantly, how to use the Passport session to identify users in the Socket.io session. While we covered most of Socket.io's features, you can learn more about Socket.io by visiting the official project page at https://socket.io.

Summary

In this article, you learned how the Socket.io module works. You went over the key features of Socket.io and learned how the server and client communicate. You configured your Socket.io server and learned how to integrate it with your Express application. You also used the Socket.io handshake configuration to integrate the Passport session.
In the end, you built a fully functional chat example and learned how to wrap the Socket.io client with an AngularJS service.