
How-To Tutorials


Magic Leap's first augmented reality headset, powered by Nvidia Tegra X2, is coming this Summer

Richard Gall
16 Jul 2018
2 min read
The Magic Leap One - the first augmented reality headset produced by startup Magic Leap - is slated to launch this summer. The news closely follows an announcement by AT&T that it will be the sole carrier for the headset. However, although Magic Leap revealed a lot about how the Magic Leap One works in a livestream on Friday (13 July), no official release date has been stated.

What we learned from the Magic Leap One livestream

The Magic Leap One livestream gave us a pretty neat insight into how the headset is going to work. It showed us how gestures form the main part of the UX, with a demo of a rock-throwing game called 'Dodge'. The visuals looked pretty exciting, and it seems Magic Leap has done a good job of developing a headset that could be game-changing for the augmented reality industry. However, there were a few holes - one Twitter user noticed, for example, that the software at one point failed to recognize when a user's hand should have been blocking the virtual images.

https://twitter.com/ID_R_McGregor/status/1017119982906494983

For those with a particular interest in the engineering that has gone into the headset, we also learned that the Magic Leap One runs on an Nvidia Tegra X2 processor. This makes it considerably more powerful than the Nintendo Switch, which is powered by the older Tegra X1.


DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections

Savia Lobo
16 Jul 2018
5 min read
It's been more than a year since the Republican Party's Donald Trump won the U.S. presidential election against Democrat Hillary Clinton. Now, Robert Mueller has indicted 12 Russian military officers who meddled in the 2016 election by hacking into the Democratic National Committee, the Democratic Congressional Campaign Committee, and the Clinton campaign. The indictment states that the Russians operated through the Guccifer 2.0 and DCLeaks personas, following which Twitter decided to suspend both accounts on its platform.

According to the San Diego Union-Tribune, Twitter found the handles of both Guccifer 2.0 and DCLeaks dormant for more than a year and a half. It also verified that the accounts had disseminated emails stolen from both Clinton's camp and the Democratic Party's organization, and subsequently suspended both.

Guccifer 2.0 is an online persona created by the conspirators, falsely presented as a Romanian hacker to hide evidence of Russian involvement in the 2016 U.S. presidential election. The persona is also associated with the leaks of documents from the Democratic National Committee (DNC) through WikiLeaks. DCLeaks published the stolen data on its website, dcleaks.com; the site is known to be a front for the Russian cyberespionage group Fancy Bear. Mueller, in his indictment, stated that both Guccifer 2.0 and DCLeaks have ties to hackers in the GRU, the Russian military intelligence unit known as the Main Intelligence Directorate.

How hackers social engineered their way into the Clinton campaign

The attackers used the hacking technique known as spear phishing, together with a piece of malware called X-Agent - a tool that can collect documents, keystrokes, and other information from computers and smartphones and send it through encrypted channels back to servers owned by the hackers. As per Mueller's indictment report, the conspirators created an email account in the name of a known member of the Clinton campaign; the address deviated from the real one by a single letter and looked almost identical to the original. Spear phishing emails were sent from this fake account to the work accounts of more than thirty different Clinton campaign employees. The link embedded in these emails claimed to direct the recipient to a document titled 'hillary-clinton-favorable-rating.xlsx', but in reality the recipients were being directed to a GRU-created website. X-Agent uses an encrypted tunneling tool known as X-Tunnel, which connects to known GRU-associated servers.

Using the X-Agent malware, the GRU agents had targeted more than 300 individuals within the Clinton campaign, Democratic National Committee, and Democratic Congressional Campaign Committee by March 2016. At the same time, hackers stole nearly 50,000 emails from the Clinton campaign, and by June 2016 they had gained control of 33 DNC computers and infected them with the malware. The indictment further explains that although the attackers tried to hide their tracks by erasing activity logs, a Linux-based version of X-Agent configured with the GRU-registered domain "linuxkrnl.net" was discovered on these networks. As a simple illustration of the kind of lookalike-address check that can flag such emails, see the sketch below.
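The following is a hedged, hypothetical sketch - not from the indictment or this article - of how a mail filter might flag a sender address that is one edit away from a trusted one, the trick described above. The addresses are made-up placeholders.

```python
import difflib

# Hypothetical trusted address book (placeholder values).
TRUSTED = ["staffer@clinton-campaign.example"]

def looks_like_spoof(sender: str, threshold: float = 0.9) -> bool:
    """Flag senders that closely resemble, but don't match, a trusted address."""
    sender = sender.lower()
    for real in TRUSTED:
        similarity = difflib.SequenceMatcher(None, sender, real.lower()).ratio()
        if sender != real.lower() and similarity >= threshold:
            return True  # near-duplicate: likely a lookalike spoof
    return False

# A one-letter deviation, like the one used against the Clinton campaign:
print(looks_like_spoof("stafffer@clinton-campaign.example"))  # True
```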
Was the Trump campaign impervious to such an attack?

Roger Stone, a former Trump campaign adviser, was said to be in contact with Guccifer 2.0 during the presidential campaign. As per Mueller's indictment, Guccifer 2.0 sent Stone this message: "Please tell me if i can help u anyhow...it would be a great pleasure for me...What do u think about the info on the turnout model for the democrats entire presidential campaign." "Pretty standard" was Stone's reply. However, Stone said that his conversations with the person behind the account were "innocuous", adding, "This exchange is entirely public and provides no evidence of collaboration or collusion with Guccifer 2.0 or anyone else in the alleged hacking of the DNC emails." Stone also stated that he never discussed this communication with Trump or his presidential campaign.

U.S. Deputy Attorney General Rod Rosenstein indicated that this Russian attack was part of the Kremlin's multifaceted approach to boosting Trump's 2016 campaign while undermining Clinton's.

Twitter stated that both accounts were suspended for being connected to a network of accounts previously suspended for operating in violation of its rules. Hillary Clinton's supporters, however, expressed anger, arguing that Twitter's response was too slow and did too little.

With the midterm elections only months away, one question on everyone's mind is: how are the intelligence community and the Department of Justice going to ensure the elections are fair and secure? There is a high chance of such attacks recurring. Asked a similar question at a cybersecurity conference, Homeland Security Secretary Kirstjen Nielsen responded, "Today I can say with confidence that we know whom to contact in every state to share threat information. That ability did not exist in 2016."

Following is Rod Rosenstein's interview with PBS NewsHour.

Social engineering attacks – things to watch out for while online!
YouTube has a $25 million plan to counter fake news and misinformation
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news


The software behind Silicon Valley’s Emmy-nominated 'Not Hotdog' app

Sugandha Lahoti
16 Jul 2018
4 min read
This is great news for all Silicon Valley fans. The Not Hotdog A.I. app, shown in the fourth episode of season 4, has been nominated for a Primetime Emmy Award. The Emmys placed Silicon Valley and the app in the category "Outstanding Creative Achievement In Interactive Media Within a Scripted Program" alongside other popular shows. Other nominations include 13 Reasons Why for "Talk To The Reasons", a website that lets you chat with the characters; Rick and Morty for "Virtual Rick-ality", a virtual reality game; Mr. Robot for "Ecoin", a fictional global digital currency; and Westworld for the "Chaos Takes Control Interactive Experience", an online experience promoting the show's second season.

Within a day of its launch, the Not Hotdog application was trending on the App Store and on Twitter, grabbing the #1 spot on both Hacker News and Product Hunt, and it won a Webby for Best Use of Machine Learning. The app uses state-of-the-art deep learning, with a mix of React Native, TensorFlow, and Keras, and has averaged 99.9% crash-free users with a 4.5+/5 rating on the app stores.

The Not Hotdog app does what the name suggests: it identifies hotdogs - and not hotdogs. It is available for both Android and iOS devices, and its description reads, "What would you say if I told you there is an app on the market that tell you if you have a hotdog or not a hotdog. It is very good and I do not want to work on it any more. You can hire someone else."

How the Not Hotdog app is built

Creator Tim Anglade built a sophisticated neural architecture for the Silicon Valley A.I. app that runs directly on your phone, and trained it with TensorFlow, Keras, and Nvidia GPUs. Of course, the use case is not very useful, but the app is a substantial example of deep learning and edge computing in pop culture. The app provides better privacy, as images never leave the user's device; consequently, users get a faster experience and offline availability, since processing never goes to the cloud. Avoiding a cloud-based AI approach also means the company can run the app at essentially zero cost, a significant saving even under a load of millions of users.

What is amazing about the app is that it was built by a single creator with limited resources (a single laptop and GPU, using hand-curated data). This speaks volumes about how much can be achieved, even with limited time and resources, by non-technical companies, individual developers, and hobbyists alike.

The initial prototype of the app was built using Google Cloud Platform's Vision API and React Native. React Native is a good choice, as it supports many devices. The Google Cloud Vision API, however, was quickly abandoned in favor of edge computing: the neural network is trained directly on a laptop, then exported and embedded into the mobile app, so the network's execution phase runs directly on the user's phone.

How TensorFlow powers the Not Hotdog app

After React Native, the second part of the tech stack was TensorFlow. They used TensorFlow's transfer learning script to retrain the Inception architecture for a more specific image problem. Transfer learning helped them get better results much faster, and with less data, than training from scratch. Inception turned out to be too big to be retrained for this purpose, so, at the suggestion of Jeremy P. Howard, they explored and settled on SqueezeNet, which offered explicit positioning as a solution for embedded deep learning and a pre-trained Keras model on GitHub. The final architecture was largely based on Google's MobileNets paper, which gave their network Inception-like accuracy on simple problems with only about 4M parameters. A minimal sketch of this kind of transfer learning appears below.
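The following is a minimal transfer-learning sketch in the spirit of the approach described above - not Anglade's actual code. It freezes a pre-trained MobileNet backbone (a stand-in for the SqueezeNet/MobileNet-style network the app used) and retrains only a small binary head; the training dataset train_ds is an assumed placeholder.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained MobileNet without its ImageNet classification head.
base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained features

# Binary head: outputs P(hotdog).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Assumed placeholder: a tf.data.Dataset of (image, label) pairs, e.g. built
# with tf.keras.utils.image_dataset_from_directory("hotdog_data/").
# model.fit(train_ds, epochs=5)
```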
YouTube has a $25 million plan to counter fake news and misinformation
Microsoft's Brad Smith calls for facial recognition technology to be regulated
Too weird for Wall Street: Broadcom's value drops after purchasing CA Technologies


Xamarin Test Cloud for API Monitoring [Tutorial]

Gebin George
16 Jul 2018
7 min read
Xamarin Test Cloud can help us identify functionality-related issues in an application on real devices. It is a great source of application monitoring when it comes to testing on different mobile devices with different versions of operating systems. Getting a detailed analysis of an application's functions is very important to make sure the application runs as expected on our target devices. It is just as critical that the application can run on different operating system versions, and that we can analyze how it performs and how much memory it uses. In this mobile DevOps tutorial, we will discuss how to use Xamarin Test Cloud and the analytics it produces after running an application on different sets of devices. This article is an excerpt from the book Mobile DevOps, written by Rohin Tak and Jhalak Modi.

We will use two different applications here, comparing their monitoring analytics to better understand how this helps us identify performance and functionality-related issues. The applications are:

- PhoneCallApp
- Xamarin Store

PhoneCallApp

Let's go through the steps to monitor our PhoneCallApp:

1. Go to https://testcloud.xamarin.com/.
2. Click on the PhoneCallApp icon to get to the details of the test runs.
3. On the next page, you'll see a list of tests run for the application. Because we have only run one test so far, Test Cloud does not yet provide the graphical metrics shown in the preceding screenshot. In the other examples we'll see next, you'll see a more detailed comparison of different test runs.
4. Click on the test run from the list to see its results.

The test run listed is the one we ran earlier in previous chapters and uploaded from our machine to Xamarin Test Cloud using the command line. To get an idea of this interface, let's look at its different parts. First, there is an overview screen that shows a summary of all the tests run for this application: how many tests failed from the total number run, how many times the app ran on a device, how many devices these tests were run on, and much more. This screen is very useful when you want a brief report on how your application is doing on different devices and OS versions.

The next thing you'll see in the left pane is the list of UITests included in the test run. This is essentially a list of all the Xamarin.UITests you included in your project; you can click on the different tests to see their respective results on the right side of the screen. Let's click on the test from the list, which takes us to a screen with detailed reports for the test run.

Have a close look at the left pane on this screen. It shows the steps of the test run on the device. These steps are simply what we had written previously in the code to take a screenshot of every activity the test performs; a hedged reconstruction of such a test appears below.
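The following is a hypothetical reconstruction of what such a Xamarin.UITest might look like - the book's actual Tests.cs is not reproduced here, and the control IDs (callButton, callStatus) are assumed placeholders.

```csharp
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;

[TestFixture(Platform.Android)]
public class Tests
{
    IApp app;
    readonly Platform platform;

    public Tests(Platform platform)
    {
        this.platform = platform;
    }

    [SetUp]
    public void BeforeEachTest()
    {
        app = ConfigureApp.Android.StartApp();
        app.Screenshot("App started");  // first step shown in Test Cloud
    }

    [Test]
    public void PressingCallButtonStartsCall()
    {
        app.Tap(c => c.Marked("callButton"));   // hypothetical button id
        app.Screenshot("Call button pressed");  // second step in Test Cloud

        // Final assert that decides whether the test passes or fails.
        bool callStarted = app.Query(c => c.Marked("callStatus")).Any();
        Assert.IsTrue(callStarted, "The call was not initiated.");
    }
}
```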
The steps are as follows (we are using the screens of the test code written in previous chapters):

1. App started: a screenshot is taken when the app starts; this was written in the BeforeEachTest() method in the Tests.cs file.
2. Call button pressed: this step is when the Xamarin.UITest presses the call button to make a call.
3. Failed step (the assert): this is the last step, shown to provide proof of the failed step, so you can compare the outcome we received with what was expected. This was the final assert that decides whether the test passes or not, based on the outcome of the Assert.IsTrue() condition.

You can click on each of these steps in the left pane and analyze the screenshots taken, to see exactly what went on during the test. This is a great way to see exactly what went wrong when a test fails.

Sometimes the screenshots are not enough to identify the issue. For a more detailed analysis, Test Cloud also provides a Device Log. Device logs are a great way to see what's going on under the hood and get more detailed information about the application's behavior, and the device's behavior, while the application runs on it. They can help pinpoint issues when a test fails on a device; logs are always a savior in that sort of scenario. Click on Device Log and you can see step-by-step logs for each screenshot on the same screen.

When a test fails, Test Cloud provides one more option, Test Failures, which is very useful for automated-test developers who want to see the exception information. Last but not least, there is also a Test Log option, which gives a consolidated log of the entire test run.

Xamarin Store app

Now that we have seen the options Test Cloud provides for monitoring an application's functionality using test runs, let's see how the dashboard and tests look when we have multiple test runs on various physical devices with different OS versions. This gives a better idea of how comparative monitoring can be done on Test Cloud to analyze an application's behavior on different devices and compare them with one another. The Xamarin Store application is a sample application provided by Test Cloud on its platform to help you understand the platform and its dashboard.

Let's go through the steps to monitor an application running on multiple devices and compare different test runs:

1. Go to the Test Cloud home page, just like in the previous example, and click on the Xamarin Store icon.
2. On the next screen, you'll see a graphical representation of different test runs and brief information: how many tests failed out of the total run, the application's size, and its peak memory usage during different test runs. This gives a nice comparative look at how the application performs across test runs. It is possible that the application was performing fine during the first run, and then some code change made some functionality fail; this graph is very useful for monitoring a timeline of changes that affected application functionality.
3. Click on the graph or the test run to see an overview of it. This screen gives a great view of how an application running on different devices can be monitored.
It's a very nice way to keep track of the application on different devices and OS versions.

4. Click on one of the steps to see its results on multiple devices. The red icon marks failed tests. This page shows all the devices you chose to run the test on: the devices the test passed on, and a red flag on the devices where it failed. You can click on each device to get device-specific screens and logs.

To summarize, we performed API monitoring efficiently using Xamarin Test Cloud. If you found this post useful, do check out the book Mobile DevOps, to deliver continuous integration and delivery for mobile applications.

API Gateway and its Need
API and Intent-Driven Networking
What is Azure API Management?


Log monitoring tools for continuous security monitoring policy [Tutorial]

Fatema Patrawala
13 Jul 2018
11 min read
How many times have we heard of an organization's entire database being breached and downloaded by hackers? The irony is that the organization often isn't aware of anything until the hacker is selling the database details on the dark web a few months later. Even though such organizations implement decent security controls, what they lack is a continuous security monitoring policy. This is one of the most common gaps you'll find in a startup or mid-sized organization. In this article, we will show how to choose the right log monitoring tool to implement a continuous security monitoring policy. You are reading an excerpt from the book Enterprise Cloud Security and Governance, written by Zeal Vora.

Log monitoring is a must in security

Log monitoring is considered part of the de facto list of things that need to be implemented in an organization. It gives us visibility of various events through a single central solution, so we don't have to end up running less or tail on every log file of every server. In the following screenshot, we have performed a new search with the keyword "not authorized to perform", and the log monitoring solution has shown us such events in a nice graphical way, along with the actual logs, spanning days. Thus, if we want to see how many permission-denied events occurred last week on Wednesday, it is a two-minute job with a central log monitoring solution that has search functionality. This makes life much easier and lets us detect anomalies and attacks much faster than the traditional approach.

Choosing the right log monitoring tool

This is a very important decision for the organization. There are both commercial and open source offerings available today, but the amount of effort each requires varies a lot. I have seen commercial offerings such as Splunk and ArcSight used in large enterprises, including national-level banks. On the other side, open source offerings such as the ELK Stack are gaining popularity, especially since Filebeat was introduced. Personally, I really like Splunk, but it gets very expensive when you generate a lot of data; this is one of the reasons many startups and mid-sized organizations pair a commercial offering with open source offerings such as the ELK Stack. Having said that, if you decide to go with the ELK Stack and have a large amount of data, you would ideally need a dedicated person to manage it. AWS also has a basic level of log monitoring capability available through CloudWatch.

Let's get started with logging and monitoring

There will always be many sources to monitor logs from. Since it would be difficult to cover every individual source, we will discuss two primary ones in turn:

- VPC flow logs
- AWS Config

VPC flow logs

VPC flow logs is a feature that allows us to capture information about the IP traffic that goes to and from network interfaces within the VPC. Flow logs help both in troubleshooting why certain traffic is not reaching EC2 instances and in understanding which traffic is accepted and which is rejected. Flow logs can be enabled at the level of an individual network interface of an EC2 instance, which lets us monitor how many packets are accepted or rejected on, say, a specific EC2 instance running in the DMZ. (A CLI sketch for enabling them follows; the console walkthrough comes after that.)
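As a hedged aside - the tutorial itself uses the console - flow logs can also be enabled from the AWS CLI. The VPC ID, log group name, and IAM role ARN below are placeholders:

```bash
# Enable flow logs for a VPC, delivering to a CloudWatch Logs group.
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-group-name vpc-flow-logs \
    --deliver-logs-permission-arn arn:aws:iam::111122223333:role/FlowLogsRole
```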
By default, VPC flow logs are not enabled, so we will go ahead and enable them within our VPC from the console:

1. Enabling flow logs for the VPC: In our environment, we have two VPCs, named Development and Production. In this case, we will enable flow logs for the Development VPC. Click on the Development VPC and select the Flow Logs tab. This gives you a button named Create Flow Log; click on it to go through the configuration procedure. Since the VPC flow log data will be sent to CloudWatch, we need to select an IAM Role that grants these permissions. Before creating our first flow log, we also need to create the CloudWatch log group the flow log data will go into: go to CloudWatch, select the Logs tab, name the log group according to what you need, and click on Create log group. Once the log group is created, fill the Destination Log Group field with its name and click on the Create Flow Log button. Once created, you will see the new flow log details under the VPC subtab.
2. Create a test setup to check the flow: To test that everything works as intended, we start our test OpenVPN instance and, in the security group section, allow inbound connections on port 443 and ICMP (ping). This gives a plethora of attackers the perfect base for detecting our instance and running attacks against our server.
3. Analyze flow logs in CloudWatch: Before analyzing the flow logs, I went for a small walk so that we would have a decent number of logs to examine when I returned. If we observe the flow log data, we see plenty of packets ending in REJECT OK as well as ACCEPT OK. Flow logs can go down to the level of the specific interfaces attached to EC2 instances. So, to check the flow logs, go to CloudWatch, select the Log Groups tab, select the log group we created, and then select the interface - in our case, the interface belonging to the OpenVPN instance we started. CloudWatch also gives us the capability to filter packets based on expressions: a simple search for REJECT OK in the search bar shows all the traffic that was rejected.
4. Viewing the logs in a GUI: Plain text data is good, but it's not very appealing and does not give you deep insight into what exactly is happening. It's always preferable to send these logs to a log monitoring tool that can. In my case, I have used Splunk to give an overview of the logs in our environment. For VPC flow logs, Splunk gives great detail in a very nice GUI and also maps the IP addresses to the locations the traffic is coming from: the traffic rate and location-related data, plus the top rejected destinations and IP addresses.

AWS Config

AWS Config is a great service that allows us to continuously assess and audit the configuration of AWS resources. With AWS Config, we can see exactly what configuration has changed from last week to today for services such as EC2, security groups, and many more.
One interesting feature of Config is compliance tests, as shown in the following screenshots: we see one rule failing and marked non-compliant, which is the CloudTrail rule. The Config service provides two important features:

- Evaluating changes in resources over a timeline
- Compliance checks

Once these are enabled and you have associated Config rules accordingly, you will see a dashboard similar to the following screenshot: on the left-hand side, Config gives details of the resources present in your AWS account; in the right-hand column, it shows whether the resources are compliant or non-compliant according to the rules that are set.

Configuring the AWS Config service

Let's look at how to get started with the AWS Config service and obtain the dashboards and compliance checks we saw in the previous screenshot:

1. Enabling the Config service: The first time we work with Config, we need to select the resources we want to evaluate. In our case, we will select both the region-specific resources and global resources such as IAM.
2. Configure S3 and IAM: Once we decide to include all the resources, the next step is to create an Amazon S3 bucket where AWS Config will store the configuration and snapshot files. We also need to select an IAM role that allows Config to put these files into the S3 bucket.
3. Select Config rules: Configuration rules are checks that run against your AWS resources, and their results feed into the compliance standard. For example, the root-account-mfa-enabled rule checks whether the ROOT account has MFA enabled or disabled, and in the end gives you a nice graphical overview of the checks' output. Currently, there are 38 AWS-managed rules we can select and use at any time; we can also define custom rules. For our case, I will use five specific rules (once active, their state can also be queried from the CLI, as sketched below):
   - cloudtrail-enabled
   - iam-password-policy
   - restricted-common-ports
   - restricted-ssh
   - root-account-mfa-enabled
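A hedged aside, not part of the book's console walkthrough: the compliance state of these rules can also be checked from the AWS CLI once they are running.

```bash
# Query the compliance state of two of the rules selected above.
aws configservice describe-compliance-by-config-rule \
    --config-rule-names cloudtrail-enabled root-account-mfa-enabled
```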
4. Config initialization: With the Config rules selected, we can click on Finish, and AWS Config will start checking resources against their associated rules. You should get a dashboard similar to the previous screenshot, showing the available resources as well as rule-compliance graphs.

Let's analyze the functionality

For demo purposes, I decided to disable the CloudTrail service; if we then look at the Config dashboard, it says that one rule check has failed. Instead of graphs, Config can also show the resources in tabular form if we want to inspect the Config rules together with the associated names.

Evaluating changes to resources

AWS Config allows us to evaluate the configuration changes that have been made to resources. This is a great feature that lets us see how a resource looked a day, a week, or even months back. It is particularly useful during incident investigations, when one might want to see exactly what changed before the incident took place; it helps things go much faster. To evaluate the changes, perform the following steps:

1. Go to AWS Config | Resources. This brings up the Resource inventory page, where you can search for resources based either on resource type or on tags. For our use case, I am searching for a tag value for an EC2 instance whose name is OpenVPN.
2. When we go inside the Config timeline, we see the overall changes that have been made to the resource. In the following screenshot, we see that a few changes were made, and Config also shows us the time at which each change was made.
3. Clicking on Changes shows exactly what was changed. In our case, it relates to a new network interface that was attached to the EC2 instance; Config displays the network interface ID, the description along with the IP address, and the security group attached to that network interface.

When we integrate AWS services with Splunk or similar monitoring tools, we can get great graphs that help us evaluate things faster. On the side, we always have the logs from CloudTrail if we want to see the changes that occurred in detail.

We covered log monitoring and how to choose the right log monitoring tool for a continuous security monitoring policy. Check out the book Enterprise Cloud Security and Governance to build resilient cloud architectures for tackling data disasters with ease.

Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)
Monitoring, Logging, and Troubleshooting
Analyzing CloudTrail Logs using Amazon Elasticsearch


Python founder resigns - Guido van Rossum goes ‘on a permanent vacation from being BDFL’

Savia Lobo
13 Jul 2018
5 min read
Python is one of the most popular scripting languages, widely adopted and loved for its simplicity. Since its humble beginnings in the late 80s as an interpreter for a new, simple-to-read scripting language, it has come to dominate much of the tech world. Python has become a vital part of web development stacks where languages such as Perl and PHP have long been core, and it is central to domains like security. It is also used in today's popular technologies such as AI, ML, and DL.

After 28 years of successfully stewarding the Python community since inventing the language back in December 1989, Guido van Rossum has decided to take himself out of the community's decision-making process as its Benevolent Dictator for Life (BDFL). Guido still promises to be part of the core development group, and he added that he will be available to mentor people - but most of the time the community will have to manage on its own. Benevolent Dictator for Life (BDFL) is a term Guido's fellow Python enthusiasts came up with for him, as a joke, while discussing meeting minutes over email regarding leading Python's development and adoption.

Who will look after the Python community now?

Guido van Rossum said, "I am not going to appoint a successor." True to his leadership style, he has thrown his team of core developers into the deep end by asking them to consider what the Python community's new governance model should be. In his memo, he asked, "So what are you all going to do? Create a democracy? Anarchy? A dictatorship? A federation?"

Guido's parting advice to the core dev team

Guido expressed confidence in his team to continue managing day-to-day tasks and operations just as they have under his leadership. The two things he wants the core developers and the community to think deeply about are:

- How PEPs are decided
- How new core developers will be inducted

He also emphasized the importance of militantly fostering the right community culture through Python's Community Code of Conduct (CoC). He said, "if you don't like that document your only option might be to leave this group voluntarily. Perhaps there are issues to decide like when should someone be kicked out (this could be banning people from python-dev or python-ideas too since those are also covered by the CoC)."

He assured the team that while he has stepped down as BDFL and from all decision-making duties, he will continue to be an active member of the community and will now be more available as a mentor to those on the core development team. Guido's decision to quit seems to have stemmed partly from the physical, mental, and emotional toll that the role has taken on him over the years. He concluded his thread on the transfer of power by saying, "I'm tired, and need a very long break."

How is the Python community taking this decision?

The development team hopes Guido will come back after his well-deserved break. As BDFL, Guido provided consistency in design and taste; with him as arbiter, the team had a very consistent view of how the community should behave, and this has been an asset for the whole team. They now have four ways to explore for governing the Python community:

1. Find a new BDFL. This option seems highly unlikely, as Guido's legacy is irreplaceable. Besides, it is practically the least robust option to rely on one person to take all key decisions and commit their full time to the community; that person would also need to be well respected and accepted as a de facto head.
2. Set up an N-virate leadership team - a group of 3 (triumvirate) or 5 (quintumvirate) experts. With such a model, the responsibilities and load would be distributed equally among members chosen from the core development team. This appears to be the current favorite on the thread that opened yesterday.
3. Become a democracy. In this model, the community gets to vote on all key decisions. This seems like the short-term fix the team is gravitating towards, at least to decide on the immediate task at hand. But many on the team acknowledge that it is not a permanent answer, as it would pull the language in too many directions and is time-consuming.
4. Explore the governance models of other open source communities. This option is also being seriously considered in the discussions.

Clearly, the community loves Guido, as is evident from the deluge of well wishes he is receiving from all over the globe. You know you've done your job well when you hear someone say "You changed my life". Guido has changed millions of lives for the better.

https://twitter.com/AndrewYNg/status/1017664116482162689
https://twitter.com/anthonypjshaw/status/1017610576640393216
https://twitter.com/generativist/status/1017547690228396032
https://twitter.com/bloodyquantum/status/1017558674024218624

Thank you, Guido, for Python, your heart, and your leadership. We know the community will thrive even in your absence because you've cultivated an excellent culture and a great set of minds.

Top 7 Python programming books you need to read
Python web development: Django vs Flask in 2018
Python experts talk Python on Twitter: Q&A Recap

How to secure your Raspberry Pi board [Tutorial]

Gebin George
13 Jul 2018
10 min read
In this Raspberry Pi tutorial, we will learn to secure our Raspberry Pi board by implementing and enabling its security features. This article is an excerpt from the book Internet of Things with Raspberry Pi 3, written by Maneesh Rao.

Changing the default password

Every Raspberry Pi running the Raspbian operating system has the default username pi and default password raspberry, which should be changed as soon as we boot up the Pi for the first time. If our Raspberry Pi is exposed to the internet and the default username and password have not been changed, it becomes an easy target for hackers.

To change the password of the Pi when using the GUI for logging in, open the menu and go to Preferences and Raspberry Pi Configuration, as shown in Figure 10.1. Within Raspberry Pi Configuration, under the System tab, select the change password option, which prompts you to provide a new password. After that, click on OK and the password is changed (refer to Figure 10.2).

If you are logging in through PuTTY using SSH, open the configuration settings by running the sudo raspi-config command, as shown in Figure 10.3. When the command runs successfully, the configuration window opens; select the second option to change the password, then Finish, as shown in Figure 10.4. It prompts you to provide a new password; enter it and exit, and the new password is set (refer to Figure 10.5).

Changing the username

All Raspberry Pis come with the default username pi, which should be changed to make the Pi more secure. We create a new user, assign it all rights, and then delete the pi user. To add a new user, run the sudo adduser adminuser command in the terminal. It prompts for a password; provide it, and you are done, as shown in Figure 10.6. Next, add the newly created user to the sudo group so that it has all the root-level permissions, as shown in Figure 10.7. Now we can delete the default user pi by running the sudo deluser pi command. This deletes the user, but its home directory /home/pi will still be there; if required, you can delete that as well.

Making sudo require a password

When a command is run with sudo as the prefix, it executes with superuser privileges. By default, running a command with sudo doesn't need a password, but this can cost dearly if a hacker gets access to the Raspberry Pi and takes control of everything. To make sure a password is required every time a command is run with superuser privileges, edit the 010_pi-nopasswd file under /etc/sudoers.d/ by executing the command shown in Figure 10.8. This opens the file in the nano editor; replace its content with pi ALL=(ALL) PASSWD: ALL, and save it.

Updated Raspbian operating system

To get the latest security updates, it is important to ensure that the Raspbian OS is updated to the latest version whenever available. Visit https://www.raspberrypi.org/documentation/raspbian/updating.md to learn the steps to update Raspbian.

Improving SSH security

SSH is one of the most common ways to access a Raspberry Pi over the network, so it is essential to make it secure.

Username and password security

Apart from having a strong password, we can allow and deny access to specific users. This is done by making changes in the sshd_config file. Run the sudo nano /etc/ssh/sshd_config command.
This opens the sshd_config file; add the following line(s) at the end to allow or deny specific users:

To allow users, add the line: AllowUsers tom john merry
To deny users, add this line: DenyUsers peter methew

For these changes to take effect, reboot the Raspberry Pi.

Key-based authentication

By using a public-private key pair to authenticate a client to the SSH server (the Raspberry Pi), we can secure our Raspberry Pi from hackers. To enable key-based authentication, we first need to generate a public-private key pair using tools called PuTTYgen for Windows and ssh-keygen for Linux. Note that the key pair should be generated by the client and not by the Raspberry Pi. For our purpose, we will use PuTTYgen to generate the key pair. Download PuTTY from the following web link; note that PuTTYgen comes with PuTTY, so you need not install it separately.

Open the PuTTYgen client and click on Generate, as shown in Figure 10.9. Next, hover the mouse over the blank area to generate the key, as highlighted in Figure 10.10. Once the key generation process is complete, you can save the public and private keys separately for later use, as shown in Figure 10.11 - ensure you keep your private key safe and secure.

Let's name the public key file rpi_pubkey and the private key file rpi_privkey.ppk, and transfer the public key file rpi_pubkey from our system to the Raspberry Pi. Log in to the Raspberry Pi and, under the user's home directory (/home/pi in our case), create a special directory named .ssh, as shown in Figure 10.12. Move into the .ssh directory using the cd command and create/open a file named authorized_keys, as shown in Figure 10.13. The nano command opens the authorized_keys file; copy in the content of our public key file, rpi_pubkey, then save (Ctrl + O) and close the file (Ctrl + X).

Now provide the required permissions for your pi user to access the files and folders, by running the following commands:

chmod 700 ~/.ssh/ (set permissions for the .ssh directory)
chmod 600 ~/.ssh/authorized_keys (set permissions for the key file)

Refer to Figure 10.14, which shows the permissions before and after running the chmod commands. Finally, disable password logins to avoid unauthorized access, by editing the /etc/ssh/sshd_config file. Open it in the nano editor:

sudo nano /etc/ssh/sshd_config

In the file, there is a parameter #PasswordAuthentication yes. Uncomment the line by removing # and set the value to no:

PasswordAuthentication no

Save (Ctrl + O) and close the file (Ctrl + X). Password login is now prohibited, and we can access the Raspberry Pi using the key file only. Restart the Raspberry Pi to make sure all the changes take effect:

sudo reboot

Here, we are assuming that the system that generated the key pair and the system being used to log in to the Pi are one and the same. Now you can log in to the Raspberry Pi using PuTTY. Open the PuTTY terminal and provide the IP address of your Pi. On the left-hand side of the PuTTY window, under Category, expand SSH, as shown in Figure 10.15. Then select Auth, which provides the option to browse and upload the private key file, as shown in Figure 10.16. Once the private key file is uploaded, click on Open and it will log in to the Raspberry Pi successfully, without any password. (For Linux clients, the sketch below shows the equivalent flow with ssh-keygen.)
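For Linux or macOS clients, a hedged equivalent of the PuTTYgen flow described above (not covered in the excerpt itself) looks like this; replace <pi-ip-address> with your Pi's address:

```bash
# Generate a 4096-bit RSA key pair on the client machine.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/rpi_key

# Append the public key to the Pi's ~/.ssh/authorized_keys
# (do this before disabling password authentication).
ssh-copy-id -i ~/.ssh/rpi_key.pub pi@<pi-ip-address>

# Log in using the key.
ssh -i ~/.ssh/rpi_key pi@<pi-ip-address>
```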
Setting up a firewall

There are many firewall solutions available for Linux/Unix-based operating systems such as Raspbian OS. These firewall solutions use iptables underneath to filter packets coming from different sources and allow only the legitimate ones to enter the system. iptables is installed on the Raspberry Pi by default but is not set up, and configuring raw iptables is a bit tedious. So, we will use an alternative tool, Uncomplicated Firewall (UFW), which is extremely easy to set up and use.

To install ufw, run the following command (refer to Figure 10.17):

sudo apt install ufw

Once the download is complete, enable ufw (refer to Figure 10.18):

sudo ufw enable

If you want to disable the firewall (refer to Figure 10.20), use:

sudo ufw disable

Now, let's see some features of ufw that we can use to improve the safety of the Raspberry Pi:

- Allow traffic only on a particular port using the allow command, as shown in Figure 10.21.
- Restrict access on a port using the deny command, as shown in Figure 10.22.
- Allow or restrict access for a specific service on a specific port; here, we allow tcp traffic on port 21 (refer to Figure 10.23).
- Check the status of all the rules under the firewall using the status command, as shown in Figure 10.24.
- Restrict access for particular IP addresses on a particular port; here, we deny access to port 30 from the IP address 192.168.2.1, as shown in Figure 10.25.

To learn more about ufw, visit https://www.linux.com/learn/introduction-uncomplicated-firewall-ufw.

Fail2Ban

At times, we use our Raspberry Pi as a server that interacts with other devices acting as clients. In such scenarios, we need to open certain ports and allow certain IP addresses to access them. These access points can become entry points for hackers to get hold of the Raspberry Pi and do damage. To protect ourselves from this threat, we can use the fail2ban tool. It monitors the logs of Raspberry Pi traffic, keeps a check on brute-force attempts and DDoS attacks, and informs the installed firewall to block requests from the offending IP address.

To install Fail2Ban, run:

sudo apt install fail2ban

Once the download completes successfully, a folder named fail2ban is created at /etc. Under this folder is a file named jail.conf. Copy the content of this file to a new file named jail.local; this enables fail2ban on the Raspberry Pi:

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Now, edit the file using the nano editor:

sudo nano /etc/fail2ban/jail.local

Look for the [ssh] section. It has a default configuration, as shown in Figure 10.26 and sketched below.
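A hedged reconstruction of that default [ssh] jail, based on the values described next (the exact stock contents may differ between Raspbian versions):

```ini
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 6
```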
This shows that Fail2Ban is enabled for ssh. It checks the port for ssh connections, filters the traffic according to the conditions set in the sshd filter file located at /etc/fail2ban/filter.d/sshd.conf, parses the logs at /var/log/auth.log for any suspicious activity, and allows only six retries for login, after which it restricts that particular IP address. The default action taken by fail2ban when someone tries to break in is defined in jail.local, as shown in Figure 10.27: when the iptables-multiport action is taken against any malicious activity, it runs as per the configuration in /etc/fail2ban/action.d/iptables-multiport.conf.

To summarize, we learned how to secure our Raspberry Pi single-board computer. If you found this post useful, do check out the book Internet of Things with Raspberry Pi 3, to interface various sensors and actuators with Raspberry Pi 3 and send data to the cloud.

Build an Actuator app for controlling Illumination with Raspberry Pi 3
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
Build your first Raspberry Pi project


Automate tasks using Azure PowerShell and Azure CLI [Tutorial]

Gebin George
12 Jul 2018
5 min read
It is no surprise that we commonly face repetitive and time-consuming tasks. For example, you might want to create multiple storage accounts and would have to follow the same steps multiple times to get the job done. This is why Microsoft supports its Azure services with multiple ways of automating most of the tasks that can be implemented in Azure. In this Azure PowerShell tutorial, we will learn how to automate redundant tasks on the Azure cloud. This article is an excerpt from the book Hands-On Networking with Azure, written by Mohamed Waly.

Azure PowerShell

PowerShell is commonly used with most Microsoft products, and Azure is no less important than any of them. You can use Azure PowerShell cmdlets to manage Azure networking tasks; however, be aware that Microsoft Azure has two types of cmdlets, one for the ASM model and another for the ARM model. The main difference is that cmdlets for the ARM model (the current portal) have RM added to the cmdlet name. For example, to create an ASM virtual network, you would use the following cmdlet:

New-AzureVirtualNetwork

But for the ARM model, you would use:

New-AzureRMVirtualNetwork

This is usually the case, but a few cmdlets are totally different, and some exist only in one of the two models. By default, you can use Azure PowerShell cmdlets in Windows PowerShell, but you will have to install the module first.

Installing the Azure PowerShell module

There are two ways of installing the Azure PowerShell module on Windows:

- Download and install the module from the following link: https://www.microsoft.com/web/downloads/platform.aspx
- Install the module from the PowerShell Gallery

Installing the Azure PowerShell module from the PowerShell Gallery

The following steps install Azure PowerShell:

1. Open PowerShell in an elevated mode.
2. To install the Azure PowerShell module for the current portal, run the cmdlet Install-Module AzureRM.
3. If your PowerShell requires a NuGet provider, you will be asked to agree to install it, and you will have to agree to the installation policy modification, as the repository is not available in your environment, as shown in the following screenshot.

Creating a virtual network in the Azure portal using PowerShell

To run your PowerShell cmdlets against Azure successfully, you need to log in first using the following cmdlet:

Login-AzureRMAccount

You will be prompted to enter the credentials of your Azure account. Voila! You are logged in and can run Azure PowerShell cmdlets. To create an Azure VNet, you first need to create the subnets that will be attached to it. Let's get started by creating the subnets:

$NSubnet = New-AzureRMVirtualNetworkSubnetConfig -Name NSubnet -AddressPrefix 192.168.1.0/24
$GWSubnet = New-AzureRMVirtualNetworkSubnetConfig -Name GatewaySubnet -AddressPrefix 192.168.2.0/27

Now you are ready to create a virtual network by triggering the following cmdlet:

New-AzureRMVirtualNetwork -ResourceGroupName PacktPub -Location WestEurope -Name PSVNet -AddressPrefix 192.168.0.0/16 -Subnet $NSubnet,$GWSubnet

Congratulations! You have your virtual network up and running with two subnets associated with it, one of them a gateway subnet. (Returning to the motivating example above, the sketch below shows how such cmdlets compose into a loop.)
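A hedged sketch, not from the book, of automating the intro's motivating example - creating several storage accounts in one loop. It assumes the AzureRM module is installed, you are logged in, and a resource group named PacktPub exists; the account names are placeholders.

```powershell
# Create three locally redundant storage accounts in one go.
1..3 | ForEach-Object {
    New-AzureRmStorageAccount -ResourceGroupName "PacktPub" `
        -Name "packtstorage$_" `
        -Location "WestEurope" `
        -SkuName "Standard_LRS"
}
```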
Adding address space to a virtual network using PowerShell

To add an address space to a virtual network, first retrieve the virtual network and store it in a variable by running the following cmdlet:

$VNet = Get-AzureRMVirtualNetwork -ResourceGroupName PacktPub -Name PSVNet

Then, add the address space by running:

$VNet.AddressSpace.AddressPrefixes.Add("10.1.0.0/16")

Finally, save the changes you have made:

Set-AzureRmVirtualNetwork -VirtualNetwork $VNet

Azure CLI

Azure CLI is an open source, cross-platform tool that supports implementing all the tasks you can do in the Azure portal, with commands. Azure CLI comes in two flavors:

- Azure CLI 2.0, which supports only the current Azure portal
- Azure CLI 1.0, which supports both portals

Throughout this book, we will be using Azure CLI 2.0, so let's get started with its installation.

Installing Azure CLI 2.0

Perform the following steps to install Azure CLI 2.0:

1. Download Azure CLI 2.0 from the following link: https://azurecliprod.blob.core.windows.net/msi/azure-cli-2.0.22.msi
2. Once downloaded, start the installation. After you click on Install, it validates your environment to check compatibility, then proceeds with the installation.
3. Once the installation completes, click on Finish, and you are good to go.
4. Open cmd and type az to access Azure CLI commands.

Creating a virtual network using Azure CLI 2.0

To create a virtual network using Azure CLI 2.0, follow these steps:

1. Log in to your Azure account using az login; open the URL that pops up on the CLI and enter the code displayed.
2. Create a new virtual network by running:

az network vnet create --name CLIVNet --resource-group PacktPub --location westeurope --address-prefix 192.168.0.0/16 --subnet-name s1 --subnet-prefix 192.168.1.0/24

Adding a gateway subnet to a virtual network using Azure CLI 2.0

To add a gateway subnet to a virtual network, run:

az network vnet subnet create --address-prefix 192.168.7.0/27 --name GatewaySubnet --resource-group PacktPub --vnet-name CLIVNet

Adding an address space to a virtual network using Azure CLI 2.0

To add an address space to a virtual network, run:

az network vnet update address-prefixes --add <address space JSON string>

Remember that you will need to supply a JSON string that describes the address space.

To summarize, we learned how to automate cloud tasks using PowerShell and Azure CLI. Check out the book Hands-On Networking with Azure to learn how to build large-scale, real-world apps using Azure networking solutions.

Creating Multitenant Applications in Azure
Fine Tune Your Web Application by Profiling and Automation
Putting Your Database at the Heart of Azure Solutions


Unity 2018.2: Unity's second release of the year!

Sugandha Lahoti
12 Jul 2018
4 min read
It has only been two months since the release of Unity 2018.1, and Unity is back with its second release of the year. Unity 2018.2 builds on the features of 2018.1 such as the Scriptable Render Pipeline (SRP), Shader Graph, and the Entity Component System. It also adds support for managed code debugging on iOS and Android, along with the final release of 64-bit (ARM64) support for Android devices. Let's look at the features in detail.

Scriptable Render Pipeline improvements

As mentioned above, Unity 2018.2 builds on the Scriptable Render Pipeline introduced in 2018.1. Version 2 comes with two additional features:

- The SRP batcher: a new Unity engine inner loop for speeding up CPU rendering without compromising GPU performance. It works with the High Definition Render Pipeline (HDRP) and Lightweight Render Pipeline (LWRP), with PC DirectX 11, Metal, and PlayStation 4 currently supported.
- Scriptable shader variant stripping: manages the number of shader variants generated, without affecting iteration time or maintenance complexity. This leads to a dramatic reduction in player build time and data size.

Performance optimizations in the Lightweight and High Definition Render Pipelines

Unity 2018.2 improves the performance of the Lightweight Render Pipeline (LWRP) with optimized tile utilization. This feature adjusts the number of loads and stores to tiles in order to optimize the memory of mobile GPUs. It also shades lights in batches, which reduces overdraw and draw calls. The release also brings better high-end visual quality to the High Definition Render Pipeline (HDRP), including volumetrics, glossy planar reflection, geometric specular AA, proxy screen-space reflection and refraction, mesh decals, and shadow masks.

Improvements in the C# Job System, Entity Component System, and Burst compiler

Unity 2018.2 introduces new reactive system samples in the Entity Component System (ECS) that let developers respond to changes in component state and emulate event-driven behavior. Burst compilation for ECS is now available on all editor platforms (Windows, Mac, Linux), and game developers can build AOT standalone players (desktop, PS4, Xbox, iOS, and Android). The C# Job System allows developers to take full advantage of today's multicore processors and write parallel code without worrying about the usual pitfalls of multithreaded programming.

Updates to Shader Graph

Shader Graph, introduced as a preview package in Unity 2018.1, allows developers to build shaders visually. This release adds further improvements: High Definition Render Pipeline (HDRP) support, manual modification of vertex position, editing of the reference name for a property, editable paths for graphs, Texture 2D and 3D arrays, and more.

Texture Mipmap Streaming

Game developers can now stream texture mipmaps into memory on demand, reducing the texture memory requirements of a Unity application. This feature speeds up initial load time, gives developers more control, and is simple to enable and manage.

Particle System improvements

Unity 2018.2 brings seven major improvements to the Particle System:

- Support for eight UVs, to use more custom data.
- MinMaxCurve and MinMaxGradient types in custom scripts, to match the style used by the Particle System UI.
- Particle Systems now convert colors into linear space, when appropriate, before uploading them to the GPU.
- Two new modes in the Shape module to emit from a sprite or SpriteRenderer component.
Updates to Shader Graph

Shader Graph, the preview package that allows developers to build shaders visually, receives further improvements: High Definition Render Pipeline (HDRP) support, manual modification of vertex position, editing of the reference name for a property, editable paths for graphs, Texture 2D and 3D arrays, and more.

Texture Mipmap Streaming

Game developers can now stream texture mipmaps into memory on demand to reduce the texture memory requirements of a Unity application. This feature speeds up initial load time, gives developers more control, and is simple to enable and manage.

Particle System improvements

Unity 2018.2 has seven major improvements to the Particle System:

- Support for eight UVs, to use more custom data.
- MinMaxCurve and MinMaxGradient types in custom scripts, to match the style used by the Particle System UI.
- Particle Systems now convert colors into linear space, when appropriate, before uploading them to the GPU.
- Two new modes in the Shape module to emit from a sprite or SpriteRenderer component.
- Two new APIs for baking the geometry of a Particle System into a mesh.
- Show Only Selected (aka Solo Mode) with the Play/Restart/Stop controls.
- Shaders that use separate alpha textures can now be used with particles, while using sprites in the Texture Sheet Animation module.

Unity Hub

Unity Hub (v1.0) is a new tool, to be released soon, designed to streamline onboarding and setup processes for all users. It is a centralized location to manage all Unity projects, simplifying how developers find, download, and manage Unity Editor licenses and add-on components. Hub 1.0 will ship with:

- Project templates
- Custom install location
- Asset Store packages added to new projects
- Modified project build targets
- Editor components added post-installation

There are additional features such as Vulkan support for the Editor on Windows and Linux, and improvements to the Progressive Lightmapper, 2D games, the SVG importer, and more. The release also supports .java and .cpp source files as plugins in a Unity project, along with updates to Cinematics and the Unity core engine. In total, there are 183 improvements and 1426 fixes in the Unity 2018.2 release. Refer to the release notes to view the full list of new features, improvements, and fixes.

Related reading: Put your game face on! Unity 2018.1 is now available; Unity plugins for augmented reality application development; Unity 2D & 3D game kits simplify Unity game development for beginners


Create a TeamCity project [Tutorial]

Gebin George
12 Jul 2018
3 min read
TeamCity is one of the most prominent tools used by DevOps professionals to perform continuous integration and delivery effectively. It plays an important role when it comes to mobile-level DevOps implementation. In this article, we will see how to create a TeamCity project. This article is an excerpt from the book Mobile DevOps, written by Rohin Tak and Jhalak Modi.

Once the installation is done, the TeamCity web user interface will open in the browser and we can create a new TeamCity project there. To do so, follow these steps:

1. Once you have logged in to the TeamCity UI, click on Create project.
2. To connect to our project from GitHub, click on From GitHub on the next screen.
3. This will open a popup with instructions to add a TeamCity application to your GitHub account.
4. Click on the register TeamCity link; it should take you to the GitHub page where you can register a new OAuth app. Give the details of the application, the homepage URL, and the callback URL, as shown in the following screenshot, and register the OAuth app.
5. Once you register, you'll get a Client ID and Client Secret on the next screen; copy these details, since they will be required for the TeamCity project.
6. Go back to TeamCity, put the Client ID and Client Secret in the required fields, and click Save.
7. Next, you need to do a one-time sign-in to allow TeamCity to use GitHub repositories. Click on Sign in to GitHub.
8. Authorize the TeamCity app to use GitHub by clicking on Authorize app.
9. Once authorized, select the PhoneCallApp repository from the list of repositories shown on TeamCity.
10. On the next screen, TeamCity will offer to create a new project from the selected URL. Give it a name and click Proceed.

This should create two things. The first is a trigger in TeamCity for each code check-in you do; each check-in will trigger a build. The second is a build step created automatically from the repository. We need to configure the build steps manually and use the build scripts described in the Creating a build script section of the book. Use those scripts, in the sequence described there, to create the build steps in TeamCity. Finally, your build steps should look like the following screenshot, consisting of all the steps mentioned in the Creating a build script section.

Now your TeamCity continuous build is ready, and a trigger is already configured to perform this build on each code check-in, or whenever it finds any code changes in the repository. This finally provides you with an Android package that is ready to be distributed. To summarize, we created a TeamCity project for Mobile DevOps. If you found this post useful, do check out the book Mobile DevOps, to continuously improve your application development lifecycle.

Related reading: Introduction to TeamCity; Getting Started with TeamCity; Jenkins 2.0: The impetus for DevOps Movement

Debugging Xamarin Application on Visual Studio [Tutorial]

Gebin George
11 Jul 2018
6 min read
Visual Studio is a great IDE for debugging any application, whether it's a web, mobile, or desktop application. It uses the same debugger that comes with the IDE for all three, and it is very easy to follow. In this tutorial, we will learn how to debug a mobile application using Visual Studio. This article is an excerpt from the book Mobile DevOps, written by Rohin Tak and Jhalak Modi.

Using the output window

The output window in Visual Studio is a window where you can see the output of what's happening. To view the output window in Visual Studio, follow these steps:

1. Go to View and click Output.
2. This will open a small window at the bottom where you can see the current and useful output being written by Visual Studio. For example, this is what is shown in the output window when we rebuild the application.

Using the Console class to show useful output

The Console class can be used to print useful information, such as logs, to the output window to get an idea of what steps are being executed. This can help if a method is failing after certain steps, as those steps will be printed in the output window. To achieve this, C# has the Console class, which is a static class. This class has methods such as Write() and WriteLine() to write anything to the output window. The Write() method writes anything to the output window, and the WriteLine() method does the same with a new line at the end.

1. Look at the following screenshot and analyze how Console.WriteLine() is used to break the method down into several steps (it is the same Click event method that was written while developing PhoneCallApp).
2. Add Console.WriteLine() to your code, as shown in the preceding screenshot.
3. Now, run the application, perform the operation, and see the output written as per your code.

This way, Console.WriteLine() can be used to write useful step-based output/logs to the output window, which can be analyzed to identify issues while debugging.
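The screenshot referenced above is not reproduced here, so the following is a hedged, self-contained sketch of that kind of step logging. The class, property, and method names are illustrative, not the actual PhoneCallApp code from the book:

using System;

// Minimal sketch: a fake page with a Click handler instrumented with
// Console.WriteLine() so each step shows up in the output window.
public class PhonePage
{
    // Stands in for a text entry control; the number is illustrative.
    public string PhoneNumberEntry { get; set; } = "9900000700";

    public void CallButton_OnClick(object sender, EventArgs e)
    {
        Console.WriteLine("CallButton_OnClick: started");

        string phoneNumber = PhoneNumberEntry;
        Console.WriteLine($"CallButton_OnClick: read number '{phoneNumber}'");

        if (string.IsNullOrEmpty(phoneNumber))
        {
            Console.WriteLine("CallButton_OnClick: no number entered, aborting");
            return;
        }

        Console.WriteLine("CallButton_OnClick: placing call...");
        // The actual dialer invocation would go here.
        Console.WriteLine("CallButton_OnClick: finished");
    }

    public static void Main()
    {
        new PhonePage().CallButton_OnClick(null, EventArgs.Empty);
    }
}

If the method fails partway through, the last line printed to the output window tells you which step it reached.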
Using breakpoints

As described earlier, breakpoints are a great way to dig deep into the code without much hassle. They can help check variables and their values, and the flow at a point or line in the code. Using breakpoints is very simple:

1. The simplest way to add a breakpoint on a line is to click on the margin, which is on the left side, in front of the line, or to click on the line and hit the F9 key.
2. You'll see a red dot in the margin area where you clicked when the breakpoint is set, as shown in the preceding screenshot.
3. Now, run the application and click the call button; the flow should stop at the breakpoint, and the line will turn yellow when it does.
4. At this point, you can inspect the values of variables before the breakpoint line by hovering over them.

Setting a conditional breakpoint

You can also set a conditional breakpoint in the code, which is basically telling Visual Studio to pause the flow only when a certain condition is met:

1. Right-click on the breakpoint set in the previous steps, and click Conditions.
2. This will open a small window over the code to set a condition for the breakpoint. For example, in the following screenshot, a condition is set for when phoneNumber == "9900000700". The breakpoint will only be hit when this condition is met; otherwise, it will not be hit.

Stepping through the code

When a breakpoint has been reached, the debug tools enable you to take control over the program's execution flow. You'll see some buttons in the toolbar, allowing you to run and step through the code. You can hover over these buttons to see their respective names:

- Step Over (F10): This executes the next line of code. If the next line is a function call, Step Over will execute the function and stop after it.
- Step Into (F11): Step Into will stop at the next line in the case of a function call, allowing you to continue line-by-line debugging of the function. If the next line is not a function, it behaves the same as Step Over.
- Step Out (Shift + F11): This will return to the line where the current function was called.
- Continue: This will continue the execution and run until the next breakpoint is reached.
- Stop Debugging: This will stop the debugging process.

Using a watch

A watch is a very useful function in debugging; it allows us to see the values, types, and other details related to variables, and to evaluate them in a better way than hovering over the variables. There are two types of watch tools available in Visual Studio: QuickWatch and Add Watch.

QuickWatch

QuickWatch is similar to a watch, but as the name suggests, it allows us to evaluate values at that moment in time. Follow these steps to use QuickWatch in Visual Studio:

1. Right-click on the variable you want to analyze and click on QuickWatch.
2. This will open a new window where you can see the type, value, and other details related to the variable.

This is very useful when a variable has a long value or string that cannot be read and evaluated properly by just hovering over the variable.

Adding a watch

Adding a watch is similar to QuickWatch, but it is more useful when you have multiple variables to analyze, and looking at each variable's value individually can take a lot of time. Follow these steps to add a watch on variables:

1. Right-click on the variable and click Add Watch.
2. This will add the variable to the Watch window and always show you its value, reflecting any change at runtime.

You can also see these variable values in a particular format for different data types, so you can have an XML value shown in XML format, or a JSON object value shown in .json format. It is a lifesaver when you want to evaluate a variable's value at each step of the code and see how it changes with every line.

To summarize, we learned how to debug a Xamarin application using Visual Studio. If you found this post useful, do check out the book Mobile DevOps, to continuously improve your mobile application development process.

Related reading: Debugging Your .NET; Debugging in Vulkan; Debugging Your .NET Application


Tensorflow 1.9 is now generally available

Savia Lobo
11 Jul 2018
3 min read
After the back-to-back release of the TensorFlow 1.9 release candidates rc-0, rc-1, and rc-2, the final version of TensorFlow 1.9 is out and generally available. Key highlights of this version include support for gradient boosted trees estimators, new Keras layers to speed up GRU and LSTM implementations, and the deprecation of tfe.Network. It also includes improved functions for data loading, text processing, and pre-made estimators.

TensorFlow 1.9 major features and improvements

As mentioned in the TensorFlow 1.9 rc-2 notes, the new Keras-based Get Started page and programmer's guide have been updated, and tf.keras has been updated to the Keras 2.1.6 API. One should try the newly added tf.keras.layers.CuDNNGRU, used for a faster GRU implementation, and tf.keras.layers.CuDNNLSTM, which allows a faster LSTM implementation. Both layers are backed by cuDNN (the NVIDIA CUDA Deep Neural Network library).

Gradient boosted trees estimators, a non-parametric statistical learning technique for classification and regression, are now supported by core feature columns and losses. Also, the Python interface for the TFLite Optimizing Converter has been expanded, and the command-line interface (AKA toco, tflite_convert) is once again included in the standard pip installation. The distributions.Bijector API in TF 1.9 also supports broadcasting for Bijectors with the new API changes.

TensorFlow 1.9 also includes improved data loading and text processing with tf.decode_compressed, tf.string_strip, and tf.strings.regex_full_match. It also adds experimental support for new pre-made estimators: tf.contrib.estimator.BaselineEstimator, tf.contrib.estimator.RNNClassifier, and tf.contrib.estimator.RNNEstimator.

This version includes two breaking changes. First, for opening up empty variable scopes, one can replace variable_scope('', ...) with variable_scope(tf.get_variable_scope(), ...), which gets the current scope of the variable. Second, the headers used for building custom ops have been moved from site-packages/external to site-packages/tensorflow/include/external.

Some bug fixes and other changes include:

- tfe.Network has been deprecated.
- Layered variable names have changed in the following conditions: using tf.keras.layers with custom variable scopes, and using tf.layers in a subclassed tf.keras.Model class.
- Added the ability to pause recording operations for gradient computation via tf.GradientTape.stop_recording in eager execution, and updated its documentation and introductory notebooks.
- Fixed an issue in which the TensorBoard Debugger Plugin could not handle a total source file size exceeding the gRPC message size limit (4 MB).
- Added GCS configuration ops and complex128 support to FFT, FFT2D, FFT3D, IFFT, IFFT2D, and IFFT3D.
- Conv3D, Conv3DBackpropInput, and Conv3DBackpropFilter now support arbitrary.
- Prevents tf.gradients() from backpropagating through integer tensors.
- LinearOperator[1D,2D,3D]Circulant added to tensorflow.linalg.

To know more about the other changes, visit the TensorFlow 1.9 release notes on GitHub.

Related reading: Create a TensorFlow LSTM that writes stories [Tutorial]; Build and train an RNN chatbot using TensorFlow [Tutorial]; Use TensorFlow and NLP to detect duplicate Quora questions [Tutorial]


Implementing Unity 2017 Game Audio [Tutorial]

Amarabha Banerjee
11 Jul 2018
11 min read
Background music and audio effects play a big role in determining any game's success or failure. Creating engaging game audio, importing audio from other sources, and customizing audio FX clips to fit the game flow are vital tasks for any game developer. In this article, we are going to discuss how to create, customize, and use third-party audio in Unity games. This article is a part of the book titled "Unity 2017 2D Game Development Projects" written by Lauren S. Ferro & Francesco Sapio.

Basics of audio and sound FX in Unity

Adding sound in Unity is simple enough, but you can implement it better if you understand how sound travels. While this is extremely important in 3D games because of the added third dimension, it is quite important in 2D games too, just in a slightly different way. Before we discuss the differences, let's first learn how sound works, with a quick physics lesson.

Listening to the physics behind sound

What we hear is not just music, sound effects (FX), and ambient background noise. Sound is a longitudinal, mechanical (vibrating) wave. These "waves" can pass through different mediums (for example, air, water, your desk) but not through a vacuum. Therefore, no one will hear your screams in space. Sound is a variation in pressure. A region of increased pressure on a sound wave is called a compression (or condensation). A region of decreased pressure on a sound wave is called a rarefaction (or dilation). You can see this concept illustrated in the following image.

The density of certain materials, such as glass and plastic, allows a certain amount of light to pass through them, and this influences how the light behaves as it passes, such as bending/refracting (that is, the index of refraction). Various materials (for example, liquids, solids, gases) have the same kind of effect when it comes to allowing sound waves to pass. Some materials allow sound to pass easily, while others dampen it. This is why sound studios/booths are made of certain materials that remove things such as echoes. It has a similar effect to screaming underwater that there is a shark: it won't be as loud as if you scream from your kitchen to tell everyone dinner is ready.

Another thing to consider is what is known as the Doppler Effect. The Doppler Effect results from an increase (or decrease) in the frequency of sound (and other things, such as light or ripples in water) as the source of the sound and the person/player move toward (or away from) each other. A simple example of this is when an emergency vehicle passes by you: the siren sounds different before it reaches you, when it is near you, and once it has passed you. Considering this example, it is because there is a sudden change in pitch in the passing siren. This is visualized in the following image.
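The book illustrates this with an image rather than an equation; for reference, the classical Doppler relationship (not given in the original text, but standard physics) can be written as:

$$ f_{\text{observed}} = f_{\text{source}} \cdot \frac{v \pm v_{\text{listener}}}{v \mp v_{\text{source}}} $$

where $v$ is the speed of sound in the medium, and $v_{\text{listener}}$ and $v_{\text{source}}$ are the speeds of the listener and the source. The upper signs apply when the two move toward each other (raising the perceived pitch); the lower signs apply when they move apart (lowering it).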
So, what is the point of knowing this when it comes to developing games? Well, it is particularly important, more so in 3D, in relation to how sounds are heard by players. For example, imagine that you're nearing a creek, but there are dense bushes, large pine trees, and rugged terrain. The sound that creek makes from where the player stands in the game world is going to be very different from the sound it would make on a completely flat plane free from any vegetation.

When it comes to 2D games, this is not necessarily as important because we are working without depth (the z axis), but similar principles apply when players navigate around a top-down environment and approach a point of interest. You don't want that sound to be as loud when the player is far away as it would be if they were up close. Within the context of 2D and 3D sounds, Unity has a parameter for this exact thing called Spatial Blend. We will discuss this more in the Audio Source section.

There are several ways that you can create audio within Unity, from importing your own/downloaded sounds to recording them live. Like images, Unity can import most standard audio file formats: AIFF, WAV, MP3, and Ogg, as well as tracker modules (for example, short instrument samples): .xm, .mod, .it, and .s3m.

Importing audio

Importing audio into Unity follows the same process as importing any other type of asset. We will cover the basics of what you need to know in the following sections.

Audio Listener

Have you heard the saying, "If a tree falls in a forest and no one is there to hear it, does it still make a sound?" Well, in Unity, if there is nothing to hear your audio, then the answer is no. This is because Unity has a component called an Audio Listener, which works like a microphone. To locate the Audio Listener, click the Main Camera, and then look over at the Inspector; it should be located near the bottom, as in the following image. If for some reason it isn't there, you can always add it by clicking the button titled Add Component, typing Audio Listener, and selecting it from the list.

The important thing to remember is that an Audio Listener is the location where sound is heard, so it makes sense that it is typically placed on the Main Camera, but it can also be placed on a Player. A single scene can only have one Audio Listener; therefore, it's best to experiment with the placement that works best for your game. It is important to remember that an Audio Listener works with an Audio Source, and must have one to work.

Audio Source

The Audio Source is where the sound comes from. This can be many different objects within a Scene, as well as background music and sound FX. The Audio Source has several parameters; later we will briefly discuss the main ones. To see more information about all the parameters, you can check out the official Unity documentation at this link: https://docs.unity3d.com/2017.2/Documentation/Manual/class-AudioSource.html

You may be wondering why we have a slider for Spatial Blend instead of a checkbox. This is because we need to fade between 2D and 3D, and there is a good reason for this. Imagine that you're in a game and you're looking at a screen on a computer. In this case, your camera is going to be fixated on whatever is on the screen. This could be checking an inventory or even entering nuclear codes. In any case, you will want the sound emitted from the screen to be the focal audio; therefore, the slider for the Spatial Blend parameter will sit closer to 2D, because you may still want ambient noises in the background incorporated into the experience. If you are closer to 2D, the sound will be the same in both speakers (or headphones). The closer you slide toward 3D, the more the volume will depend on the proximity of the Sound Listener to the Sound Source. It will also allow things such as the Doppler Effect to be more noticeable, as it takes place in 3D space. There are also specific settings for these things.
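To make the Audio Source parameters concrete, here is a minimal sketch of configuring one from a script instead of the Inspector. The component fields (clip, loop, playOnAwake, volume, spatialBlend) are the standard Unity AudioSource API; the class name and the particular values are illustrative assumptions:

using UnityEngine;

// Minimal sketch: configure and play looping background music from code.
public class BackgroundMusic : MonoBehaviour
{
    public AudioClip musicClip; // assign a clip in the Inspector

    void Start()
    {
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = musicClip;
        source.loop = true;          // keep the background music running
        source.playOnAwake = false;  // we start it manually below
        source.volume = 0.1f;        // illustrative low volume for music
        source.spatialBlend = 0f;    // 0 = fully 2D, 1 = fully 3D
        source.Play();
    }
}

Setting spatialBlend to 0 gives the always-the-same-in-both-ears behavior described above; sliding it toward 1 makes the volume depend on the distance between this source and the scene's Audio Listener.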
Choosing sounds for background and FX

When it comes to picking the right kind of music for your game, just like with the aesthetics, you need to think about what kind of "mood" you're trying to create. Is it a somber or an uplifting mood? Are you ironically contrasting the graphics (for example, happy) with gloomy music? There is really no right or wrong when it comes to your musical selection, as long as you can communicate to the player what they are supposed to feel, at least in general. For this game, I have provided you with some example "moods" that you can apply. Of course, you're welcome to choose sounds other than these that are more to your liking! All the sounds that we will use are from the Free Sound website: https://freesound.org. You will need to create an account to download them, but it's free and there are many great sounds that you can use when creating games. In saying this, if you intend to create your games for commercial purposes, please make sure that you check the Terms and Conditions on Free Sound so that you're not violating any of them. Each track will have its own attribution license, including those for commercial use, so always check! For this project, we're going to stick with the "Happy" version, but I encourage you to experiment!

Happy
- Collecting Angel Cakes: Chime sound (https://freesound.org/people/jgreer/sounds/333629/)
- Being attacked by the enemy: Cat Purr/Twit4.wav (https://freesound.org/people/steffcaffrey/sounds/262309/)
- Collecting health: correct (https://freesound.org/people/ertfelda/sounds/243701/)
- Collecting bonuses: Signal-Ring 1 (https://freesound.org/people/Vendarro/sounds/399315/)
- Background: Kirmes_Orgel_004_2_Rosamunde.mp3 (https://freesound.org/people/bilwiss/sounds/24720/)

Sad
- Collecting Angel Cakes: Glass Tap (https://freesound.org/people/Unicornaphobist/sounds/262958/)
- Being attacked by the enemy: musicbox1.wav (https://freesound.org/people/sandocho/sounds/17700/)
- Collecting health: chime.wav (https://freesound.org/people/Psykoosiossi/sounds/398661/)
- Collecting bonuses: short metallic hit (https://freesound.org/people/waveplay/sounds/366400/)
- Background: improvised chill 8 (https://freesound.org/people/waveplay/sounds/238529/)

Retro
- Collecting Angel Cakes: TF_Buzz.flac (https://freesound.org/people/copyc4t/sounds/235652/)
- Being attacked by the enemy: Game Die (https://freesound.org/people/josepharaoh99/sounds/364929/)
- Collecting health: galanghee.wav (https://freesound.org/people/metamorphmuses/sounds/91387/)
- Collecting bonuses: SW05.WAV (https://freesound.org/people/mad-monkey/sounds/66684/)
- Background: Angel-techno pop music loop (https://freesound.org/people/frankum/sounds/387410/)

Not everyone can hear well, or at all, so it pays to keep this in mind when you're developing games that may rely on audio to provide feedback to players. While subtitles can make dialogue more accessible, sound FX can be a little trickier. Therefore, when it comes to implementing audio, think about how you could complement it, even if the effect that you're trying to achieve with sound is subtle. For example, if you play a "bleep" for every item collected, perhaps you could associate it with a slight glow or flash of color. The choice is up to you, but it's something to keep in mind.

On the other end of the spectrum, those who can hear might also want to turn the sounds off. We've all played that game (or several) that really begins to become irritating, so make sure that you also check this while you're playtesting. You don't want an awesome game to suck because your audio is intolerable and there is not an option to TURN THE SOUND OFF! You've been warned.

Integrating background music in our game

Once you choose which music better suits the kind of feel you want to create for your game, import both the sound and the music into the project. If you want, you can create two folders for them, SoundFX and Music, respectively. Now, in our scene, we need to do the following:

1. Create an empty game object (by clicking GameObject | Create Empty) and rename it Background Music.
2. Attach an Audio Source component (in the Inspector, click Add Component | Audio | Audio Source).
3. Drag and drop the music we decided on into the AudioClip variable and check the Loop option, so the background music will never stop. Also check that Play On Awake is ticked, even though it should be by default, so the music starts playing as soon as the game starts.
4. Hit Play to start the game.
5. Lastly, adjust the volume, depending on the music you chose. This may require a bit of playtesting (remember to set the value after play mode, because the settings you adjust during play mode are not kept).

In the end, this is how the component should look (in the image, I chose the happy theme music and set a Volume of 0.1). Here in this article we have shown you how to incorporate game audio effects and background music in Unity games. If you liked this article, then check out the complete book Unity 2017 2D Game Development Projects.

Related reading: AI for Unity game developers: How to emulate real-world senses in your NPC agent; Working with Unity Variables to script powerful Unity 2017 games; How to use arrays, lists, and dictionaries in Unity for 3D game development

Writing test functions in Golang [Tutorial]

Natasha Mathur
10 Jul 2018
9 min read
Go is a modern programming language built for 21st-century application development. Hardware and technology have advanced significantly over the past decade, and most older languages do not take advantage of these advancements. Go allows us to build network applications that take advantage of the concurrency and parallelism made available by multicore systems. Testing is an important part of programming, whether it is in Go or in any other language. Go has a straightforward approach to writing tests, and in this tutorial, we will look at some important tools to help with testing. This tutorial is an excerpt from the book 'Distributed Computing with Go', written by V.N. Nikhil Anurag.

There are certain rules and conventions we need to follow to test our code. They are as follows:

- Source files and associated test files are placed in the same package/folder.
- The name of the test file for any given source file is <source-file-name>_test.go.
- Test functions need to have the "Test" prefix, and the next character in the function name should be capitalized.

In the remainder of this tutorial, we will look at three files and their associated tests:

- variadic.go and variadic_test.go
- addInt.go and addInt_test.go
- nil_test.go (there isn't any source file for these tests)

Along the way, we will introduce any concepts we might use.

variadic.go

In order to understand the first set of tests, we need to understand what a variadic function is and how Go handles it. Let's start with the definition: a variadic function is a function that can accept any number of arguments during a function call. Given that Go is a statically typed language, the only limitation imposed by the type system on a variadic function is that the indefinite number of arguments passed to it should all be of the same data type. However, this does not prevent the function from also taking regular parameters of other types. Inside the function, the arguments are received as a slice of elements if any are passed, or nil when none are passed. Let's look at the code to get a better idea:

// variadic.go
package main

func simpleVariadicToSlice(numbers ...int) []int {
    return numbers
}

func mixedVariadicToSlice(name string, numbers ...int) (string, []int) {
    return name, numbers
}

// Does not work.
// func badVariadic(name ...string, numbers ...int) {}

We use the ... prefix before the data type to define a function as a variadic function. Note that we can have only one variadic parameter per function, and it has to be the last parameter. We can see this error if we uncomment the line for badVariadic and try to test the code.

variadic_test.go

We would like to test the two valid functions, simpleVariadicToSlice and mixedVariadicToSlice, against the various rules defined above. However, for the sake of brevity, we will test these:

- simpleVariadicToSlice: for no arguments, for three arguments, and also to look at how to pass a slice to a variadic function
- mixedVariadicToSlice: to accept a simple argument and a variadic argument

Let's now look at the code to test these two functions:

// variadic_test.go
package main

import "testing"

func TestSimpleVariadicToSlice(t *testing.T) {
    // Test for no arguments
    if val := simpleVariadicToSlice(); val != nil {
        t.Error("value should be nil", nil)
    } else {
        t.Log("simpleVariadicToSlice() -> nil")
    }

    // Test for random set of values
    vals := simpleVariadicToSlice(1, 2, 3)
    expected := []int{1, 2, 3}
    isErr := false
    for i := 0; i < 3; i++ {
        if vals[i] != expected[i] {
            isErr = true
            break
        }
    }
    if isErr {
        t.Error("value should be []int{1, 2, 3}", vals)
    } else {
        t.Log("simpleVariadicToSlice(1, 2, 3) -> []int{1, 2, 3}")
    }

    // Test for a slice
    vals = simpleVariadicToSlice(expected...)
    isErr = false
    for i := 0; i < 3; i++ {
        if vals[i] != expected[i] {
            isErr = true
            break
        }
    }
    if isErr {
        t.Error("value should be []int{1, 2, 3}", vals)
    } else {
        t.Log("simpleVariadicToSlice([]int{1, 2, 3}...) -> []int{1, 2, 3}")
    }
}

func TestMixedVariadicToSlice(t *testing.T) {
    // Test for simple argument & no variadic arguments
    name, numbers := mixedVariadicToSlice("Bob")
    if name == "Bob" && numbers == nil {
        t.Log("Received as expected: Bob, <nil slice>")
    } else {
        t.Errorf("Received unexpected values: %s, %s", name, numbers)
    }
}

Running tests in variadic_test.go

Let's run these tests and see the output. We'll use the -v flag while running the tests to see the output of each individual test:

$ go test -v ./{variadic_test.go,variadic.go}
=== RUN   TestSimpleVariadicToSlice
--- PASS: TestSimpleVariadicToSlice (0.00s)
    variadic_test.go:10: simpleVariadicToSlice() -> nil
    variadic_test.go:26: simpleVariadicToSlice(1, 2, 3) -> []int{1, 2, 3}
    variadic_test.go:41: simpleVariadicToSlice([]int{1, 2, 3}...) -> []int{1, 2, 3}
=== RUN   TestMixedVariadicToSlice
--- PASS: TestMixedVariadicToSlice (0.00s)
    variadic_test.go:49: Received as expected: Bob, <nil slice>
PASS
ok      command-line-arguments  0.001s

addInt.go

The tests in variadic_test.go elaborated on the rules for variadic functions. However, you might have noticed that TestSimpleVariadicToSlice ran three tests in its function body, but go test treats it as a single test. Go provides a good way to run multiple tests within a single function, and we shall look at it in addInt_test.go. For this example, we will use a very simple function, as shown in this code:

// addInt.go
package main

func addInt(numbers ...int) int {
    sum := 0
    for _, num := range numbers {
        sum += num
    }
    return sum
}

addInt_test.go

You might have also noticed in TestSimpleVariadicToSlice that we duplicated a lot of logic, while the only varying factors were the input and expected values. One style of testing, known as table-driven development, defines a table of all the required data to run a test, iterates over the "rows" of the table, and runs tests against them. Let's look at the tests, which will run against no arguments and against variadic arguments:

// addInt_test.go
package main

import (
    "testing"
)

func TestAddInt(t *testing.T) {
    testCases := []struct {
        Name     string
        Values   []int
        Expected int
    }{
        {"addInt() -> 0", []int{}, 0},
        {"addInt([]int{10, 20, 100}) -> 130", []int{10, 20, 100}, 130},
    }

    for _, tc := range testCases {
        t.Run(tc.Name, func(t *testing.T) {
            sum := addInt(tc.Values...)
            if sum != tc.Expected {
                t.Errorf("%d != %d", sum, tc.Expected)
            } else {
                t.Logf("%d == %d", sum, tc.Expected)
            }
        })
    }
}

Running tests in addInt_test.go

Let's now run the tests in this file; we expect each row of the testCases table to be treated as a separate test:

$ go test -v ./{addInt.go,addInt_test.go}
=== RUN   TestAddInt
=== RUN   TestAddInt/addInt()_->_0
=== RUN   TestAddInt/addInt([]int{10,_20,_100})_->_130
--- PASS: TestAddInt (0.00s)
    --- PASS: TestAddInt/addInt()_->_0 (0.00s)
        addInt_test.go:23: 0 == 0
    --- PASS: TestAddInt/addInt([]int{10,_20,_100})_->_130 (0.00s)
        addInt_test.go:23: 130 == 130
PASS
ok      command-line-arguments  0.001s

nil_test.go

We can also create tests that are not specific to any particular source file; the only criterion is that the filename needs to have the <text>_test.go form. The tests in nil_test.go elucidate some features of the language which the developer might find useful while writing tests. They are as follows:

- httptest.NewServer: Imagine the case where we have to test our code against a server that sends back some data. Starting and coordinating a full-blown server just to access some data is hard. The httptest.NewServer function solves this issue for us.
- t.Helper: If we use the same logic to pass or fail a lot of testCases, it would make sense to segregate this logic into a separate function. However, this would skew the test run call stack. We can see this by commenting out t.Helper() in the tests and rerunning go test.
- We can also format our command-line output to print pretty results. We will show a simple example of adding a tick mark for passed cases and a cross mark for failed cases.

In the test, we will run a test server, make GET requests on it, and then test the expected output versus the actual output:

// nil_test.go
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "testing"
)

const passMark = "\u2713"
const failMark = "\u2717"

func assertResponseEqual(t *testing.T, expected string, actual string) {
    t.Helper() // comment this line to see tests fail due to 'if expected != actual'
    if expected != actual {
        t.Errorf("%s != %s %s", expected, actual, failMark)
    } else {
        t.Logf("%s == %s %s", expected, actual, passMark)
    }
}

func TestServer(t *testing.T) {
    testServer := httptest.NewServer(
        http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                path := r.RequestURI
                if path == "/1" {
                    w.Write([]byte("Got 1."))
                } else {
                    w.Write([]byte("Got None."))
                }
            }))
    defer testServer.Close()

    for _, testCase := range []struct {
        Name     string
        Path     string
        Expected string
    }{
        {"Request correct URL", "/1", "Got 1."},
        {"Request incorrect URL", "/12345", "Got None."},
    } {
        t.Run(testCase.Name, func(t *testing.T) {
            res, err := http.Get(testServer.URL + testCase.Path)
            if err != nil {
                t.Fatal(err)
            }
            actual, err := ioutil.ReadAll(res.Body)
            res.Body.Close()
            if err != nil {
                t.Fatal(err)
            }
            assertResponseEqual(t, testCase.Expected, fmt.Sprintf("%s", actual))
        })
    }

    t.Run("Fail for no reason", func(t *testing.T) {
        assertResponseEqual(t, "+", "-")
    })
}

Running tests in nil_test.go

We run three tests, where two test cases will pass and one will fail. This way we can see the tick mark and cross mark in action:

$ go test -v ./nil_test.go
=== RUN   TestServer
=== RUN   TestServer/Request_correct_URL
=== RUN   TestServer/Request_incorrect_URL
=== RUN   TestServer/Fail_for_no_reason
--- FAIL: TestServer (0.00s)
    --- PASS: TestServer/Request_correct_URL (0.00s)
        nil_test.go:55: Got 1. == Got 1. ✓
    --- PASS: TestServer/Request_incorrect_URL (0.00s)
        nil_test.go:55: Got None. == Got None. ✓
    --- FAIL: TestServer/Fail_for_no_reason (0.00s)
        nil_test.go:59: + != - ✗
FAIL
exit status 1
FAIL    command-line-arguments  0.003s

We looked at how to write test functions in Go, and learned a few interesting concepts when dealing with variadic functions and other useful test helpers. If you found this post useful, do check out the book 'Distributed Computing with Go' to learn more about testing, Goroutines, RESTful web services, and other concepts in Go.

Related reading: Why is Go the go-to language for cloud-native development? – An interview with Mina Andrawos; Systems programming with Go in UNIX and Linux; How to build a basic server-side chatbot using Go


AMD’s $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs

Natasha Mathur
10 Jul 2018
3 min read
Chinese chip producer Hygon has begun production of China-made x86 processors named "Dhyana". These processors use AMD's Zen microarchitecture and are the result of a licensing deal between AMD and its Chinese partners. Hygon has started shipping the new Dhyana x86 CPUs.

According to official statements from AMD, it does not permit the selling of final chip designs to its partners in China; instead, it encourages its partners to design their own processors that suit the needs of the Chinese server market. This is part of China's effort to break its dependency on the foreign technology market.

In 2016, AMD announced that it was working on a joint project in China to develop processors. This provided AMD with $293 million in cash from Tianjin Haiguang Advanced Technology Investment Co. (THATIC), which includes AMD as well as the Chinese Academy of Sciences. What's interesting is that AMD managed to establish a license allowing Chinese processor manufacturers to develop and sell x86 processors despite the fact that Intel was blocked from selling Xeon processors to China in 2015 by the Obama administration. That happened over concerns that the chips would help China's nuclear weapons programs.

Dhyana processors currently focus on embedded applications. The part is a system on chip (SoC) rather than a socketed chip, but this design doesn't prevent the Dhyana processors from being used in high-performance or data center applications, which usually leverage Intel Xeon and other server processors. Also, Linux kernel developers have stated that the x86 processors are very close in design to AMD's EPYC. In fact, moving the Linux kernel code for EPYC processors to the Hygon chips required fewer than 200 new lines of code, according to a report from Michael Larabel of Phoronix. The only differences between the two are the vendor IDs and family series.

Apart from the AMD deal, there are other chip-producing ventures that China is heavily invested in. One such venture is Zhaoxin Semiconductor, which is working to manufacture x86 chips through a partnership with VIA. China is making continuous efforts to free itself from US interventions and to change its long-term processor market. There are indications that most of the x86 processors are 14nm chips, but there isn't much information available on the capabilities of the Dhyana family, and other details of their manufacturing characteristics are still unknown.

Related reading: Baidu releases Kunlun AI chip, China's first cloud-to-edge AI chip; Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo