
Tech Guides

How hackers are using Deepfakes to trick people

Guest Contributor
02 Oct 2019
7 min read
Cybersecurity analysts have warned that spoofing using artificial intelligence is well within the realm of possibility, and that people should be aware they can be fooled by such voice- or image-based deepfakes.

What is a deepfake?

Deepfakes rely on a branch of AI called Generative Adversarial Networks (GANs). A GAN consists of two machine learning networks that teach each other through an ongoing feedback loop. The first network, the generator, takes real content and alters it. The second, known as the discriminator, tests the authenticity of the changes. As the two networks keep passing the material back and forth and receiving feedback about it, they get smarter.
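To make that feedback loop concrete, here is a minimal sketch of GAN training in PyTorch. Everything in it is illustrative: the toy one-dimensional "real" data, the tiny network sizes, and the hyperparameters are placeholder assumptions, not details from any actual deepfake system.

import torch
import torch.nn as nn

# Toy data: "real" samples drawn from a Gaussian the generator must imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Discriminator: learn to tell real samples from the generator's fakes.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()  # detach: don't update G here
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Generator: learn to produce samples the discriminator labels "real".
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

Each pass of this loop is one round of the back-and-forth described above: the discriminator's feedback becomes the training signal that makes the generator's fakes harder to detect.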
GANs are still in the early stages, but numerous commercial applications are expected. For example, some can convert a single image into different poses. Others can suggest outfits similar to what a celebrity wears in a photo, or turn a low-quality picture into a high-resolution snapshot.

Outside of those helpful uses, though, deepfakes can have sinister purposes. Consider the blowback if a criminal creates a deepfake video of something that would hurt someone's reputation, for instance a deepfake video of a politician "admitting" to illegal activities, like accepting a bribe. Other instances of this kind of AI that are already possible include misleading spoken dialogue, where the lips of someone saying something offensive are placed onto someone else. In one of the best-known examples of deepfake manipulation, BuzzFeed published a clip now widely known as "ObamaPeele." It combined a video of President Obama with film director Jordan Peele's lips. The result made it seem as if Obama cursed and said things he never would in public.

Deepfakes are real enough to cause action

The advanced deepfake efforts that cybersecurity analysts warn about rely on AI to create something so real that it causes people to act. For example, in March of 2019, the CEO of a British energy firm received a call from what sounded like his boss. The message was urgent: the executive needed to transfer a large amount of funds to a Hungarian supplier within the hour. Only after the money was sent did it become clear that the boss was never on the line. Instead, cybercriminals had used AI to generate an audio clip that mimicked the boss's voice, played it over the phone, and convinced the CEO to transfer the funds.

The unnamed victim was scammed out of €220,000, an amount equal to roughly $243,000. Reports indicate it is the first successful hack of its kind, although it is an unusual way for hackers to go about fooling victims. Some analysts point out that other hacks like this may have happened but gone unreported, or perhaps the people involved did not realize hackers had used this technology. According to Rüdiger Kirsch, a fraud expert at the insurance company that covered the full amount of the claim, this is the first time the insurer has dealt with such an instance. The AI apparently used to mimic the voice was so authentic that it captured the parent company leader's German accent and the melody of his voice.

Deepfakes capitalize on urgency

One of the telltale signs of deepfakes and other kinds of spoofing, most of which currently happen online, is a false sense of urgency. For example, lottery scammers emphasize that their victims must send personal details immediately to avoid missing out on their prizes. The deepfake hackers used time constraints to fool this CEO as well. The voice on the other end of the phone told the CEO that he needed to send the money to a Hungarian supplier within the hour, and he complied. Even more frighteningly, the deceiving tech was so advanced that the hackers used it for several phone calls to the victim.

One of the best ways to avoid scams is to get verification from outside sources rather than immediately responding to the person engaging with you. For example, if you are at work and get a call or email from someone in accounting who asks for your Social Security number or bank account details to update their records, the safest thing to do is to contact the accounting department yourself and verify the request's legitimacy.

Many online spoofing attempts have spelling or grammatical errors, too. The challenging thing about voice trickery is that those characteristics don't apply; you can only go by what your ears tell you. Since these kinds of attacks are not yet widespread, the safest way to avoid disastrous consequences is to ignore the urgency and take the time you need to verify requests through other sources.

Hackers can target deepfake victims indefinitely

One of the most striking things about this AI deepfake case is that it involved more than one phone conversation. The criminals called again after receiving the funds to say that the parent company had sent reimbursement funds to the United Kingdom firm. But they didn't stop there. The CEO received a third call that impersonated the parent company representative again and requested another payment. That time, though, the CEO became suspicious and didn't agree, as the promised reimbursement had not yet come through, and the latest call requesting funds originated from an Austrian phone number. Eventually, the CEO called his boss and discovered the fakery by handling calls from both the real person and the imposter simultaneously.

Evidence suggests the hackers used commercially available voice generation software to pull off their attack. However, it is not clear whether the hackers used bots to respond when the victim asked questions of the caller posing as the parent company representative.

Why do deepfakes work so well?

This deepfake is undoubtedly more involved than the emails hackers send out in bulk, hoping to fool a few unsuspecting victims. Even messages that use company logos, fonts, and familiar phrases are arguably not as realistic as something that mimics a person's voice so well that the victim can't distinguish the fake from the real thing.

The novelty of these incidents also makes people less aware that they could happen. Although many employees receive training that helps them spot online scams, the curriculum does not yet extend to these advanced deepfake cases. Making the caller someone in a position of power increases the likelihood of compliance, too. Generally, if people hear a voice on the other end of the phone that they recognize as their superior's, they won't question it. Plus, they might worry that any delay in fulfilling the caller's request could be perceived as a lack of trust in their boss or an unwillingness to follow orders.

You've probably heard people say, "I'll believe it when I see it." But thanks to this emerging deepfake technology, you can't necessarily confirm the authenticity of something by hearing or seeing it. That's an unfortunate development, and one that highlights how important it is to investigate further before acting.
That may mean checking facts or sources, or getting in touch with superiors directly to verify what they want you to do. Those extra steps take more time, but they could save you from getting fooled.

Author Bio

Kayla Matthews writes about big data, cybersecurity, and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com.

Read next:

Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data & Society report

Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube

Now there is a Deepfake that can animate your face with just your voice and a picture using Temporal GANs

FreeCAD: Open Source Design on the Bleeding Edge

Michael Ang
31 Dec 2014
5 min read
Are you looking for software for designing physical objects for 3D printing or physical construction? Computer-aided design (CAD) software is used extensively in engineering when designing objects that will be physically constructed. Programs such as Blender or SketchUp can be used to design models for 3D printing, but there's a catch: it's quite possible to design models that look great onscreen but don't meet the "solid object" requirements of 3D printing. Since CAD programs are targeted at building real-world objects, they can be a better fit for designing things that will exist not just on the screen but in the physical world.

[Image: 3D-printable servo-controlled Silly-String trigger by sliptonic]

FreeCAD distinguishes itself by being open source, cross-platform, and designed for parametric modeling. Anyone is free to download or modify FreeCAD, and it works on Windows, Mac, and Linux. With parametric modeling, it's possible to go back and change parameters in your design and have the rest of your design update. For example, if you design a project box to hold your electronics project and decide it needs to be wider, you could change the width parameter and the box would automatically update. FreeCAD allows you to design using its visual interface and also offers complete control via Python scripting.

[Image: Changing the size of a hole by changing a parameter]

I recommend Bram De Vries' FreeCAD tutorials on YouTube to help you get started with FreeCAD. The FreeCAD website has links to download the software and a getting started guide. FreeCAD is under heavy development by a small group of individuals, so expect to encounter a little strangeness from time to time, and save often! If you're used to software developed by a large and well-compensated engineering team, you may be surprised that certain features are missing, but on the other hand it's quite amazing how much FreeCAD offers in software that is truly free. You might find a few gaping holes in functionality, but you also won't find any features that are locked away until you go "Premium".

If you didn't think I was geeky enough for loving FreeCAD, let me tell you my favorite feature: everything is scriptable using Python. FreeCAD is primarily written in Python, and you have access to a live Python console while the program is running (View->Views->Python console) that you can use to interactively write code and immediately see the results. Scripting in FreeCAD isn't done through some limited programming interface or with a limited programming language: you have access to pretty much everything inside FreeCAD using standard Python code. You can script repetitive tasks in the UI, generate new parts from scratch, or even add whole new "workbenches" that appear alongside the built-in features in the FreeCAD UI.

[Image: Creating a simple part interactively with Python]

There are many example macros to try. One of my favorites allows you to generate an airfoil shape from online airfoil profiles.
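For a taste of that scripting, here is a minimal sketch, typed into FreeCAD's Python console, that creates a parametric box, changes a parameter, and exports the result for printing. The dimensions and output path are made-up examples.

# Run inside FreeCAD's Python console (View -> Views -> Python console).
import FreeCAD as App
import Mesh

doc = App.newDocument("Demo")

# A parametric box: its dimensions are document properties we can edit later.
box = doc.addObject("Part::Box", "ProjectBox")
box.Length = 40  # mm
box.Width = 25
box.Height = 10
doc.recompute()

# Parametric modeling in action: widen the box and let the model update.
box.Width = 35
doc.recompute()

# Export the shape as an STL file for the 3D printing software.
# (The path is a placeholder; pick one that suits your system.)
Mesh.export([box], "/tmp/project_box.stl")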
My own Polygon Construction Kit (Polycon) is built inside FreeCAD. The basic idea of Polycon is to convert a simple polygon model into a physical object by creating a set of 3D-printed connectors that can be used to reconstruct the polygon in the real world. The process involves iterating over the 3D model and generating a connector for each vertex of the polygon. Then each connector needs to be exported as an STL file for the 3D printing software. By implementing Polycon as a FreeCAD module, I was able to leverage a huge amount of functionality related to loading the 3D model, generating the connector shapes, and exporting the files for printing. FreeCAD's UI makes it easy to see how the connectors look and make adjustments to each one as necessary. Then I can export all the connectors as well-organized STL files, all by pressing one button! Doing this manually instead of in code could literally take hundreds of hours, even for a simple model.

FreeCAD is developed by a small group of people and is still in the "alpha" stage, but it has the potential to become a very important tool in the open source ecosystem. FreeCAD fills the need for an open source CAD tool the same way that Blender and GIMP do for 3D graphics and image editing.

Another open source CAD tool to check out is OpenSCAD. This tool lets you design solid 3D objects (the kind we like to print!) using a simple programming language. OpenSCAD is a great program; its simple syntax and interface are a great way to start designing solid objects using code and thinking in "X-Y-Z". My first implementation of Polycon used OpenSCAD, but I eventually switched over to FreeCAD, since FreeCAD offers the ability to analyze shapes as well as create them, and Python is much more powerful than OpenSCAD's programming language.

If you're building 3D models to be printed, or are just interested in trying out computer-aided design, FreeCAD is worth a look. Commercial offerings are likely to be more polished and reliable, but FreeCAD's parametric modeling, scriptability, and cross-platform support in an open source package are quite impressive. It's a great tool for designing objects to be built in the real world.

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical realms by constructing real-world objects from simple 3D models. He is one of the organizers of Art Hack Day, an event for hackers whose medium is art and artists whose medium is tech.

Top five questions to ask when evaluating a Data Monitoring solution

Guest Contributor
27 Oct 2018
6 min read
Massive changes are happening in the way IT services are consumed and delivered. Cloud-based infrastructure is being tied together and instrumented by DevOps processes, while microservices-driven apps are replacing monolithic architectures. This evolution is driving the need for greater monitoring and better analysis of data than we have ever seen before. The need is compounded by the fact that an application today may be instrumented with sensors and devices that provide users with critical input for making decisions.

Why is there a need for monitoring and analysis?

The placement of sensors on practically every available surface in the material world, from machines to humans, is a reality today. Almost anything capable of giving off a measurable metric or a recorded event can be instrumented, in the virtual world as well as the physical world, and has the need for monitoring. Metrics involve the consistent measurement of characteristics, such as CPU usage, while events are something that is triggered, such as a temperature reading crossing a threshold.
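The distinction matters for how a monitoring pipeline is built. As a minimal sketch in Python (with a simulated sensor standing in for a real agent, and an arbitrary threshold chosen for illustration), metrics are sampled continuously while events are derived from them when a condition fires:

import random
import time

TEMP_THRESHOLD = 75.0  # event fires above this; illustrative value only

def read_temperature():
    # Stand-in for a real sensor or monitoring agent; degrees Celsius.
    return 60 + random.random() * 25

events = []
for _ in range(10):
    metric = read_temperature()        # metric: sampled on every pass
    if metric > TEMP_THRESHOLD:        # event: triggered by a condition
        events.append({"type": "overheat",
                       "value": metric,
                       "ts": time.time()})
    time.sleep(0.1)

print(f"{len(events)} overheat event(s) detected")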
The right instrumentation, observation, and analytics are required to create business insight from the myriad data points coming from these instruments. In the virtual world, monitoring and controlling the software components that drive business processes is critical. Data monitoring in software is an important aspect of visualizing what systems are doing, what activities are happening and precisely when, and how well the applications and services are performing.

There is, of course, a business justification for all this monitoring of constant streams of metrics and events data. Companies want to become more data-driven; they want to apply data insights to be better situationally aware of business opportunities and threats. A data-driven organization is able to predict outcomes more effectively than one relying on historical information or on gut instinct. When vast amounts of data points are monitored and analyzed, the organization can find interesting "business moments" in the data. These insights help identify emerging opportunities and competitive advantages.

How to develop a data monitoring strategy

Establishing an overall IT monitoring strategy that works for everyone across the board is nearly impossible. But it is possible to develop a monitoring strategy uniquely tailored to specific IT and business needs. At a high level, organizations can start developing their data monitoring strategy by asking these five fundamental questions:

#1 Have we considered all stakeholder needs?

One of the more common mistakes DevOps teams make is focusing the monitoring strategy on the needs of just a few stakeholders, and not addressing the requirements of stakeholders outside of IT operations, such as line of business (LOB) owners, application developers and owners, and other subgroups within operations, such as network operations (NOC) or communications teams. For example, an app developer may need usage statistics around application performance, while the network operator might be interested in network bandwidth usage by that app's users.

#2 Will the data capture strategy meet future needs?

Organizations must, of course, address the data capture needs of today at the enterprise level, but at the same time they must consider the future. Developing a long-term plan helps future-proof the overall strategy, since data formats and data exchange protocols always evolve. The strategy should also consider future needs around ingestion and query volumes. Planning for how much data will be generated, stored, and archived will help establish a better long-term plan.

#3 Will the data analytics satisfy my organization's evolving needs?

Data analysis needs change over time. Stakeholders will ask for different types of analysis; planning ahead for those needs and opting for a flexible data analysis strategy will help ensure that the solution can support them in the future.

#4 Is the presentation layer modular and embeddable?

A flexible user interface that addresses the needs of all stakeholders is important for meeting the organization's overarching goals. Solutions that deliver configurable dashboards, enabling users to specify queries for custom dashboards, meet this need for flexibility. Organizations should consider a plug-and-play model that allows users to choose different presentation layers as needed.

#5 Does the architecture enable smart actions?

The ability to detect anomalies and trigger specific actions is a critical part of a monitoring strategy. A flexible and extensible model should be used to meet the notification preferences of diverse user groups. Organizations should consider self-learning models that can be trained to detect undefined anomalies in the collected data. Monitoring solutions that address the broader monitoring needs of the entire enterprise are preferred.

What are purpose-built monitoring platforms?

Devising an overall IT monitoring strategy that meets these needs and fundamental technology requirements is a tall order. But new purpose-built monitoring platforms have been created to deal with today's requirements for monitoring and analyzing these specific metrics and events workloads, often called time-series data, and for providing situational awareness to the business. These platforms support ingesting millions of data points per second, can scale both horizontally and vertically, are designed from the ground up to support real-time monitoring and decision making, and have strong machine learning and anomaly detection functions to aid in discovering interesting business moments. In addition, they are resource-aware, applying compression and down-sampling functions to aid optimal resource utilization, and are built to support faster time to market with minimal dependencies. With the right strategy in mind and tools in place, organizations can address the evolving monitoring needs of the entire organization.

About the Author

Mark Herring is the CMO of InfluxData. He is a passionate marketeer with a proven track record of generating leads, building pipeline, and building vibrant developer and open source communities: a data-driven marketeer with a proven ability to distinguish the forest from the trees, improve performance, and deliver on strategic imperatives. Prior to InfluxData, Herring was vice president of corporate marketing and developer marketing at Hortonworks, where he grew the developer community by over 40x. Herring brings over 20 years of relevant marketing experience from his roles at Software AG, Sun, Oracle, and Forte Software.

Read next:

TensorFlow announces TensorFlow Data Validation (TFDV) to automate and scale data analysis, validation, and monitoring

How AI is going to transform the Data Center

Introducing TimescaleDB 1.0, the first OS time-series database with full SQL support

The Web Development Tools Behind A Large Portion of the Modern Internet: Pornography

Erik Kappelman
20 Feb 2017
6 min read
Pornography is one of the most common forms of media on the Internet, if not the most common, whether you go by the number of websites or the amount of data transferred. Despite this fact, Internet pornography is rarely discussed or written about in positive terms. This is somewhat unexpected, given that pornography has spurred many technological advances throughout its history. Many of the advances in video capture and display were driven by the need to make and display pornography better. The desire to purchase pornography on the Internet with more anonymity was one of the ways PayPal drew, and continues to draw, customers to its services.

This blog will look into some of the tools used by some of the more popular Internet pornography sites today. We will be examining the HTML source of some of the largest websites in this industry. The content of this blog will not be explicit, and the intention is not titillation. YouPorn is one of the top 100 most-accessed websites on the Internet, so I believe it is relevant to have a serious conversation about the technologies these sites use. This conversation does not have to be explicit in any way, and it will not be.

Much of what is in the <head> tag in the YouPorn HTML source is related to loading assets, such as stylesheets. After several <meta> tags, most designed to enhance the website's SEO, a very large chunk of JavaScript appears. It is hard to say at this point whether YouPorn is using a common frontend framework or whether this JavaScript was wholly written by a developer somewhere. It certainly was minified before it was sent to the frontend, which is the least you would expect. This script does a variety of things. It handles that lovely popup that occurs as soon as a viewer clicks anywhere on the page; this is done with vanilla JavaScript. The script also collects a large amount of information about the viewer's device, including the operating system, the browser, the device type and brand, and even some information about the CPU. This information is used to optimize the viewer's experience. The script also detects whether the viewer is using AdBlock, and modifies the page accordingly.

Two familiar technologies that appear in this script are jQuery and AJAX. Both would be very useful for a website whose main purpose is displaying content: AJAX can help expedite the movement of content from backend to frontend, and jQuery can simplify the DOM manipulation needed for a responsive user interface. AJAX and jQuery can also be seen in the source code of the PornHub website. Again, this is really the least you would expect from a website that serves as much content as any of the currently popular porn websites.

The source code for these pages shows that YouPorn and PornHub both use Google Analytics tools, presumably to assist in their content targeting. This is part of how pornography websites become more and more geared toward a specific viewer over time. PornHub and YouPorn spend a lot of lines of code building what could be considered a profile of their viewers. This way, viewers see what they want immediately, which ought to enhance their experience and keep them online. xHamster follows a similar template: it identifies information about the user's device and uses Google Analytics to target the viewer with specific content.
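You can try this kind of source spelunking yourself. Here is a minimal sketch in Python that fetches a page and checks its HTML for fingerprints commonly left by the tools mentioned above. The URL is a placeholder, and the marker strings are rough heuristics I chose for illustration, not a definitive detection method.

import requests

# Rough, assumed fingerprint strings for a few common front-end tools.
FINGERPRINTS = {
    "jQuery": "jquery",
    "Bootstrap": "bootstrap",
    "Google Analytics": "google-analytics.com",
}

# Placeholder URL; substitute the site whose source you want to examine.
html = requests.get("https://example.com", timeout=10).text.lower()

for tool, marker in FINGERPRINTS.items():
    status = "found" if marker in html else "not found"
    print(f"{tool}: {status}")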
Layout and navigation are important for any website, and although pornography is very desirable to some, websites that display it have so many competitors that they must all try very hard to satisfy their viewers. This makes every detail important. YouPorn and PornHub appear to use Bootstrap as the foundation of their frontend design. There is quite a bit of customization performed by the sites, but Bootstrap is still the foundation. Although it is somewhat less clear, it seems that xHamster also uses Bootstrap as its design foundation.

Now, let's choose a video and see what the source code tells us about what happens when viewers interact with the content. On PornHub, there is a series of video previews; when one is rolled over, a video sprite appears to give the viewer a preview of the video. Once the preview is clicked, the viewer is sent to a new page to view the specific video. In the case of PornHub, this is done through a PHP script that uses the video's ID and an AJAX request to get the user onto the right page with the right video. Once we are on the video's page, we can see that PornHub, and probably xHamster and YouPorn as well, are using Flash Video. I am viewing these websites on a MacBook, so it is likely that the video type is different when viewed on a device that does not support Flash Video. This is part of the reason so much information about a viewer's device is collected upon visiting these websites.

This short investigation into the tools used by these websites has revealed that, although pornographers have been on the cutting edge of web technology in the past, some of the current pornography providers are using tools that are somewhat unimpressive, or at least run of the mill. That said, there is never a good reason to reinvent the wheel, and these websites are clearly doing fine in terms of viewership. For the aspiring developers out there, take it to heart that some of the most viewed websites on the Internet are using some of the most basic tools to provide their users with content. This confirms what I have often found to be true: getting it done right is far more important than getting it done in a fancy way. I have only scratched the surface of this topic, and I hope that others will investigate more of the technologies used by this very large portion of the modern Internet.

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.

How Serverless computing is making AI development easier

Bhagyashree R
12 Sep 2018
5 min read
AI has been around for quite some time, enabling developers to build intelligent apps that cater to the needs of their users. And not only app developers: businesses are also using AI to gain insights from their data, such as their customers' buying behaviors, the busiest time of the year, and so on. While AI is cool and fascinating, developing an AI-powered app is not that easy. Developers and data scientists have to invest a lot of their time in collecting and preparing the data, building and training the model, and finally deploying it in production.

Machine learning, a subset of AI, feels difficult because the traditional development process is complicated and slow. While creating machine learning models, we need different tools for different functionalities, which means we should have knowledge of them all. This is certainly not practical. Several factors make the current situation even more difficult: scaling the inference logic, addressing continuous development, making the service highly available, deployment, testing, and operation. This is where serverless computing comes into the picture. Let's dive into what exactly serverless computing is and how it can help ease AI development.

What is serverless computing?

Serverless computing is the concept of building and running applications in which the computing resources are provided as scalable cloud services. It is a deployment model where applications, as bundles of functions, are uploaded to a cloud platform and then executed. Serverless computing does not mean that servers are no longer required to host and run code. Of course we need servers, but server management for the applications is taken care of by the cloud provider. Nor does it imply that operations engineers are no longer required. Rather, it means that with serverless computing, consumers no longer need to spend time and resources on server provisioning, maintenance, updates, scaling, and capacity planning. Instead, all of these tasks and capabilities are handled by the serverless platform and are completely abstracted away from the developers and IT/operations teams. This allows developers to focus on writing their business logic, and operations engineers to elevate their focus to more business-critical tasks.

Serverless computing is the union of two ideas:

Backend as a Service (BaaS): BaaS provides developers a way to link their applications with third-party backend cloud storage. It includes services such as authentication, database access, and messaging, which are supplied through physical or virtual servers located in the cloud.

Function as a Service (FaaS): FaaS allows users to run a specific task or function remotely; after the function completes, the results are returned to the user. The applications run in stateless compute containers that are event-triggered and fully managed by a third party.

AWS Lambda, Google Cloud Functions, Azure Functions, and IBM Cloud Functions are some of the serverless computing providers that let us upload a function and have the rest taken care of automatically.
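To make the FaaS idea concrete, here is a minimal sketch of a model-serving function written against AWS Lambda's Python handler convention. The inference logic is a stub, and the event shape assumes an API Gateway-style JSON body; a real deployment would load trained model weights from storage rather than computing a placeholder score.

import json

def predict(features):
    # Placeholder inference: a real handler would call model.predict(...)
    # on weights loaded from storage such as S3.
    score = sum(features) / max(len(features), 1)
    return {"score": score}

def lambda_handler(event, context):
    # With an API Gateway proxy integration, the request body arrives
    # as a JSON string under event["body"].
    body = json.loads(event.get("body") or "{}")
    result = predict(body.get("features", []))
    return {
        "statusCode": 200,
        "body": json.dumps(result),
    }

The platform scales this function automatically with request volume and bills only for the invocations that actually run, which is exactly the set of benefits discussed next.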
Read also: Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 2

Why is serverless a good choice for AI development?

Along with the obvious advantage of hassle-free server management, let's see what else it has to offer for your artificial intelligence project:

Focus on core tasks. Managing servers and deploying a machine learning model is not a good skill match for a data scientist, or even for a machine learning engineer. With serverless computing, servers conveniently vanish from your development and deployment workflow.

Auto-scalability. This is one of the key benefits of serverless computing. As long as your model is correctly deployed on the serverless platform, you don't have to worry about making it scale when your workload rises. Serverless computing gives all businesses, big and small, the ability to use what they need and scale without worrying about complex and time-consuming data migrations.

Never pay for idle. In traditional application deployment models, users pay a fixed and recurring cost for compute resources, regardless of the amount of computing work the server actually performs. In a serverless deployment, you only pay for service usage: you are charged only for the number of executions and their corresponding duration.

Reduced interdependence. You can think of machine learning models as functions in serverless, which can be invoked, updated, and deleted at any time without side effects on the rest of the system. Different teams can work independently to develop, deploy, and scale their microservices. This greatly simplifies the orchestration of timelines by product and dev managers.

Abstraction from the users. Your machine learning model is exposed as a service to users with the help of an API gateway. This makes it easier to decentralize your backend, isolate failures on a per-model level, and hide implementation details from the final user.

High availability. Serverless applications have built-in availability and fault tolerance. You don't need to architect for these capabilities, since the services running the application provide them by default.

Serverless computing can facilitate a simpler approach to artificial intelligence by removing the baggage of server maintenance from developers and data scientists. But nothing is perfect, right? It also comes with some drawbacks, number one being vendor lock-in: serverless features vary from one vendor to another, which makes it difficult to switch vendors. Another disadvantage is decreased transparency: your infrastructure is managed by someone else, so understanding the entire system becomes a little more difficult. Serverless is not the answer to every problem, but it is improving every day, making AI development easier.

Read next:

What's new in Google Cloud Functions serverless platform

Serverless computing wars: AWS Lambdas vs Azure Functions

Google's event-driven serverless platform, Cloud Function, is now generally available

Getting Started with DevOps

Michael Herndon
10 Feb 2016
7 min read
DevOps requires you to know many facets of people and technology. If you're interested in starting your journey into the world of DevOps, then take the time to know what you are getting yourself into, be ready to put in some work, and be ready to push out of your comfort zone.

Know What You're Getting Yourself Into

Working in a DevOps job where you're responsible for both coding and operational tasks means that you need to be able to shift mental gears. Mental context switching comes at a cost. You need to be able to pull yourself out of one mindset and switch to another, and you need to be able to prioritize. Accept your limitations and know when it's prudent to request more resources to handle the load.

The amount of context switching will vary depending on the business. Let's say that you join a startup and you're the only DevOps person on the team. In this scenario, you're most likely the operations team, and still responsible for some coding tasks as well. This means that you need to tackle operations tasks as they come in. In this instance, Scrum and Agile will only carry you so far; you'll have to take more of a GTD approach. If you come from a development background, you will be tempted to put coding first, since you have deadlines. However, if you are the operations team, then operations must come first. When you become part of the operations team, employees at your business are now your customers too.

Some days you can churn out code; other days are an onslaught of important, time-sensitive requests. At the business I currently work for, I took on the DevOps role so that other developers could focus on coding. One of the developers I work with has exceptional code output. However, operational tasks were impeding his productivity. It was an obvious choice for me to jump in and take over the operational tasks so that he could focus his efforts on bringing new features to customers. It's simply good business. Ego can get in the way of good business and DevOps. Leave your ego at home.

In a bigger business, you may have a DevOps team with more breathing room to focus on the things you're most interested in, whether that's coding or working with systems.

Emergencies happen. When an emergency arises, you need to be able to calmly assess the situation, avoid the blame game, and provide solutions. Don't react; respond. If you're too excitable or easily get caught up in the emotions of a situation, DevOps may be your trial by fire. Work on pulling yourself outside of a situation so that you can see the whole picture and work towards solving the problem. Never play the blame game. Be the person who gets things done.

Dive Into DevOps

Start small. Taking on too much will overwhelm you and stifle progress. After you've done a few iterations of taking small steps, you'll be further along the journey than you realize. "It's a dangerous business, Frodo, going out your door. You step onto the road, and if you don't keep your feet, there's no knowing where you might be swept off to." - Bilbo Baggins

If you're a developer, take one of your side projects and set up continuous delivery for it. Keep it simple: use something like Travis CI or AppVeyor and have your final output published somewhere. If you're using something like Node, you could set up nightly builds for npm. If it's .NET, you could use a service like MyGet.
The second thing I would do as a developer is focus on learning SSH, security access, and scheduled tasks. One of the things I've seen developers struggle with is locking down systems, so it's worth taking the time to dive into user access permissions. If you're on Windows, learn the Windows Task Scheduler. If you're on Linux, learn to set up cron jobs.

If you're from the operations and systems side of things, pick a scripting language that suits your needs. If you're working for a company that uses Microsoft technology, I'd suggest that you learn the PowerShell scripting language and a language that compiles to .NET, like C# or F#. If you're using open source technologies, I'd suggest learning Bash and a language like Ruby or Python. Puppet and Chef use Ruby; SaltStack uses Python. Build a simple web application with the language of your choice. That should give you enough familiarity with a language to start creating scripts that automate tasks.

Read DevOps books like Continuous Delivery or the Continuous Delivery and DevOps Quickstart Guide. Expand your knowledge. Explore tools. Work on your intercommunication skills. Create a list of tasks that you wish to automate, then make a habit out of reducing that list.

Build a Habit Out of Automating Infrastructure

Make it a habit to find time to automate your infrastructure while continuing to support your business. It's rare to get into a position whose sole focus is automating infrastructure, so it's important to be able to carve out time to remove mundane work, freeing you to focus your time and value on tasks that can't be automated.

A habit loop is made up of three things: a cue, a routine, and a reward. For example, at 2pm your alarm goes off (cue). You go for a short run (routine). You feel awake and refreshed (reward). Design a cue that works for you. For example, every Friday at 2pm you could switch gears to work on automation. Spend some time automating a task or infrastructure need (routine), then find a reward that suits your lifestyle. A reward could be having a treat on Friday to celebrate all the hard work for the week, or going home early (if your business permits this). Maybe learning something new is the reward, in which case you spend a little time each week with a new DevOps-related technology. Once you've removed some of the repetitive tasks that waste time, you'll find yourself with enough time to take on bigger automation projects that seemed impossible to get to before. Repeat this process ad infinitum (to infinity and beyond).

Lastly, Always Write and Communicate

Whether you plan on going into DevOps or not, the ability to communicate will set you apart from others in your field. In DevOps, communication becomes a necessity, because the value you provide may not always be apparent to everyone around you. Furthermore, you need to be able to resolve group conflicts, persuasively elicit buy-in, and provide a vision that people can follow. Always strive to improve your communication skills. Read books. Write. Work on your non-verbal communication skills: non-verbal cues are often said to account for as much as 93% of communication, and the messages your body language sends could be preventing you from getting your ideas across. Communicating in plain language, pitched at the lowest common denominator of your intended audience, is the goal. Technical and non-technical people alike need to understand problems, solutions, and the value that you are giving them.
Learn to use the right adjectives to paint bright illustrations in the minds of your readers and help them conceptualize hard-to-understand topics. The ability to persuade with writing is almost a lost art. It is a skill that transcends careers, disciplines, and fields of study. Used correctly, it can provide the vision to guide your business into becoming a lean competitor that delivers exceptional value to customers. At the end of the day, DevOps exists so that you can provide exceptional value to customers. Let your words guide and inspire the people around you.

Off You Go

All of this is easier said than done. It takes time, practice, and years of experience. Don't be discouraged and don't give up. Instead, find the things that light up your passion and focus on taking small incremental steps that allow you to win. You'll be there before you know it.

About the author

Michael Herndon is the head of DevOps at Solovis, creator of badmishka.co, and all-around mischievous nerdy guy.

GDPR is pushing the Chief Data Officer role center stage

Aaron Lazar
13 Apr 2018
9 min read
Gartner predicted that by 2020, 90% of large organizations in regulated industries will have a Chief Data Officer role. With the recent heat around Facebook, Mark Zuckerberg, and the fast-approaching GDPR compliance deadline, it's quite likely that 2018 will be the year of the Chief Data Officer. (This article was first published in October 2017 and has been updated to keep up with the latest developments in GDPR.)

In 2014, around 400 CEOs and top business execs were asked how they recognize data as a corporate asset. They responded with a mixed set of reactions and viewed the worth of data in their organizations in varied ways. Now, in 2018, those reactions have drastically changed: more and more organizations have realized the importance of data as an asset. More importantly, the European Union (EU) has made it mandatory that General Data Protection Regulation (GDPR) compliance be achieved by the 25th of May, 2018.

The primary reason for creating the Chief Data Officer role was to connect the ends between functional management and IT teams in an organization. But now, it looks like the CDO will also be primarily focused on setting up and driving GDPR compliance, to avoid a fine of up to €20 million or 4% of the annual global turnover of the previous year, whichever is higher. We're going to spend a few minutes breaking down the Chief Data Officer role for you, revealing several interesting insights along the way. Let's start with the obvious question.

What might the Chief Data Officer's responsibilities be?

Like other C-suite execs, a Chief Data Officer is expected to have a well-blended mix of technical know-how and business acumen. The role is very diverse, and its scope can be a pain to define. Here are some of the key responsibilities of a Chief Data Officer:

Data Policies and GDPR Compliance

Data security is one of the most important elements that any business must consider; it needs to comply with the regulatory standards and requirements of the country where it operates. A Chief Data Officer is responsible for ensuring compliance with policies across all branches of the business, and with the associated compliance requirement taxonomies, on a global level.

What is GDPR? If you're thinking there were no data protection laws before the General Data Protection Regulation 2016/679, that's not true. The major difference, however, is that GDPR focuses more on customer data privacy and protection. GDPR requirements will change the way organizations store, process, and protect their customers' personal data. Organizations will need solutions for assessing, implementing, and maintaining GDPR compliance, and that's where Chief Data Officers fit in.

Using data to gain a competitive edge

Chief Data Officers need sound knowledge of the business's customers and the markets where it operates, plus strong analytical skills, to leverage the right data at the right time and in the right place. This eventually gives the business an edge over its competitors. For example, a ferry service could use data to identify the rates that customers would be willing to pay at a certain time of the day.

Setting in motion the best practices of data governance

Organizations span the globe these days, and employees from different parts of the world often work on the same data. This can result in data moving through unconnected systems and ending up as inefficient or disjointed pieces of business information.
A Chief Data Officer needs to ensure that this information is aggregated and maintained in such a way that clear information ownership is established across the organization.

Architecting future-proof data solutions

A Chief Data Officer often acts like a data architect, sometimes taking on responsibility for planning, designing, and building Big Data systems and ensuring their successful integration with other systems in the organization. Designing systems that can answer users' problems now and in the future is vital. Chief Data Officers are often found asking themselves key questions, like how to generate data with maximum reusability while also keeping it as accurate and relevant as possible.

Defining Information Management tools

Different business units across the globe tend to use different tools, technologies, and systems to work on, store, and share information at an enterprise level. This greatly affects a company's ability to access and leverage data for effective decision making and various other duties. A Chief Data Officer is responsible for establishing data-oriented standards across the business, and for driving all arms of the business to comply with those standards and embrace change, to ensure the integrity of data.

Spotting new opportunities

A Chief Data Officer is responsible for spotting new opportunities that the business could venture into, through careful analysis of data and past records. For example, a motor company could leverage sales information to make an informed decision on which age group to target with their new SUV in the making, to maximize sales.

These are just the tip of the iceberg when it comes to a Chief Data Officer's responsibilities. Responsibilities go hand-in-hand with the skills and traits required to execute them. Below are some key capabilities sought after in a CDO.

Key Chief Data Officer Skills

The right person for the job is expected to possess impeccable leadership and C-suite-level communication skills, as well as strong business acumen. They are expected to have strong knowledge of GDPR software tools and solutions, to enable the organizations hiring them to swiftly transform and adopt the new regulations. They are also expected to possess knowledge of IT architecture, including familiarity with leading architectural standards such as TOGAF or the Zachman Framework. They need to be experienced in driving data governance as well as data quality and integrity, while also possessing strong knowledge of data analytics, visualization, and storytelling. Familiarity with Big Data solutions like Hadoop, MapReduce, and HBase is a plus.

How much do Chief Data Officers earn?

Now, let's take a look at what kind of compensation a Chief Data Officer is likely to be offered. To tell you the truth, the answer to this question is still a bit hazy, but it's sure to firm up given the recent developments in the regulatory and legal areas related to data. About a year ago, a blog post from careeraddict revealed that the salary for a CDO in the US was around $112,000 annually, while a job listing seen on Indeed quoted $200,000 as the annual salary. Indeed shows 7 jobs posted for a CDO in the last 15 days. We took the estimated salaries of CDOs and compared them with those of CIOs and CTOs in the same companies. It turns out that most were on par, with a few CDO compensations falling slightly short of the CIO and CTO salaries. These are just base salary figures; bonuses can add up to 50% on top in some cases.
However, please note that these salary figures vary heavily based on the type of organization and the industry.

Do businesses even need a Chief Data Officer?

One might argue that some of the skills expected of a Chief Data Officer would also be held by the CIO, the Chief Digital Officer, or the Data Protection Officer (if the organization has one). Why, then, have a Chief Data Officer at all and incur a significant extra cost to the company? With the rapid change in tech and the rate at which data is generated, used, and discarded, most indicators point in the direction of having a separate Chief Data Officer working alongside the CIO.

It's critical to have a clearly defined need for both roles to co-exist. Blurring the boundaries of the two roles can be detrimental, and organizations must therefore be painstakingly mindful of the defined KRAs, clearly delineating the two roles to keep the business structure running smoothly. A Chief Data Officer's main focus will be on the latest data-centric technological innovations and their compliance with the new standards, while also boosting customer engagement, privacy, and in turn loyalty and the business's competitive advantage. The CIO, on the other hand, focuses on improving the bottom line by owning business productivity metrics, cost-cutting initiatives, making IT investments, and so on: an inward-facing data-management and architecture role. The CIO is therefore the person responsible for leading digital initiatives at a board level.

In addition to managing data and governing information, if the CIO's responsibilities were to also include implementing analytics in fresh ways to generate value for the business, it would be a tall order. To put it simply, it is more practical for the CIO to own the systems and for the CDO to oversee all the bits and bytes that flow through those systems. Moreover, in several cases, the Chief Data Officer will act as a liaison between the business and IT. Chief Data Officers and CIOs therefore need to work together and support each other for a better-functioning business.

The bottom line: a Chief Data Officer is essential

For an organization dealing with a lot of data, a Chief Data Officer is a must. Failure to have one on board can result in a fine of €10 million or 2% of the organization's worldwide turnover, whichever is higher. Here are the criteria for an organization to have dedicated personnel managing data protection. The organization's core activities should:

- Have data processing operations that require regular and systematic monitoring of data subjects on a large scale, or monitoring of individuals
- Be processing special categories of data on a large scale (i.e. sensitive data such as health, religion, race, or sexual orientation)
- Have data processing carried out by a public authority or a body processing personal data, except for courts operating in their judicial capacity

Apart from this mandate, a Chief Data Officer can add immense value by aligning data-driven insights with the organization's vision and goals. A CDO can bridge the gap between the CMO and the CIO by focusing on meeting customer requirements through data-driven products. For those in data- and insight-centric roles, such as data scientists, data engineers, and data analysts, the CDO is a natural destination in their career progression journey. The Chief Data Officer role is highly attractive in terms of the scope of responsibilities, the capabilities, and of course, the pay.
Certification courses like this one are popping up to help individuals shape themselves for the role. All in all, this new C-suite position is, in most organizations, the perfect pivot between old and new, bridging silos and making a future where data privacy is intact.

The best Angular yet - New Features in AngularJS 1.3

Sebastian Müller
16 Apr 2015
5 min read
AngularJS 1.3 was released in October 2014, and it brings a lot of new and exciting features and performance improvements to the popular JavaScript framework. In this article, we will cover the new features and improvements that make AngularJS even more awesome.

Better Form Handling with ng-model-options

The ng-model-options directive, added in version 1.3, allows you to define how model updates are done. You use this directive in combination with ng-model.

Debounce for Delayed Model Updates

In AngularJS 1.2, the model value was updated with every key press. With version 1.3 and ng-model-options, you can define a debounce time in milliseconds, which delays the model update until the user has stopped typing for the configured time. This is mainly a performance feature, saving $digest cycles that would normally occur after every key press when you don't use ng-model-options:

<input type="text" ng-model="my.username" ng-model-options="{ debounce: 500 }" />

updateOn - Update the Model on a Defined Event

An alternative to the debounce option inside the ng-model-options directive is updateOn, which updates the model value when the given event name is triggered. This is also a useful feature for performance reasons:

<input type="text" ng-model="my.username" ng-model-options="{ updateOn: 'blur' }" />

In our example, we only update the model value when the user leaves the form field.

getterSetter - Use getter/setter Functions in ng-model

app.js:

angular.module('myApp', []).controller('MyController', ['$scope', function($scope) {
  var myEmail = '[email protected]';
  $scope.user = {
    email: function email(newEmail) {
      if (angular.isDefined(newEmail)) {
        myEmail = newEmail;
      }
      return myEmail;
    }
  };
}]);

index.html:

<div ng-app="myApp" ng-controller="MyController">
  current user email: {{ user.email() }}
  <input type="email" ng-model="user.email" ng-model-options="{ getterSetter: true }" />
</div>

When you set getterSetter to true, Angular will treat the referenced model attribute as a getter and setter method. When the function is called with no parameter, it's a getter call, and AngularJS expects you to return the currently assigned value. AngularJS calls the method with one parameter when the model needs to be updated.

New Module - ngMessages

The new ngMessages module provides features for cleaner error message handling in forms. It is not contained in the core framework and must be loaded via a separate script file.

index.html:

<body>
  ...
  <script src="angular.js"></script>
  <script src="angular-messages.js"></script>
  <script src="app.js"></script>
</body>

app.js:

// load the ngMessages module as a dependency
angular.module('myApp', ['ngMessages']);

The first version contains only two directives for error message handling:

<form name="myForm">
  <input type="text" name="myField" ng-model="myModel.field" ng-maxlength="5" required />
  <div ng-messages="myForm.myField.$error" ng-messages-multiple>
    <div ng-message="maxlength">
      Your field is too long!
    </div>
    <div ng-message="required">
      This field is required!
    </div>
  </div>
</form>

First, you need a container element with an ng-messages directive referencing the $error object of the field you want to show error messages for. The $error object contains all validation errors that currently exist. Inside the container element, you use the ng-message directive for every error type that can occur. Elements with this directive are automatically hidden when no validation error of the given type exists.
When you set the ng-messages-multiple attribute on the element carrying the ng-messages directive, all validation error messages are displayed at the same time.

Strict-DI Mode

AngularJS provides multiple ways to use the dependency injection mechanism in your application. One way is not safe to use when you minify your JavaScript files. Let's take a look at this example:

angular.module('myApp', []).controller('MyController', function($scope) {
  $scope.username = 'JohnDoe';
});

This example works perfectly in the browser as long as you do not minify this code with a JavaScript minifier like UglifyJS or Google Closure Compiler. The minified code of this controller might look like this:

angular.module('myApp', []).controller('MyController', function(a) {
  a.username = 'JohnDoe';
});

When you run this code in your browser, you will see that your application is broken. AngularJS cannot inject the $scope service anymore, because the minifier changed the function parameter name. To prevent this type of bug, you have to use this array syntax:

angular.module('myApp', []).controller('MyController', ['$scope', function($scope) {
  $scope.username = 'JohnDoe';
}]);

When this code is minified by your tool of choice, AngularJS knows what to inject, because the provided string '$scope' is not rewritten by the minifier:

angular.module('myApp', []).controller('MyController', ['$scope', function(a) {
  a.username = 'JohnDoe';
}]);

Using the new Strict-DI mode, developers are forced to use the array syntax. An exception is thrown when they don't use it. To enable Strict-DI mode, add the ng-strict-di directive to the element that carries the ng-app directive:

<html ng-app="myApp" ng-strict-di>
<head>
</head>
<body>
...
</body>
</html>

IE8 Browser Support

AngularJS 1.2 had built-in support for Internet Explorer 8 and up. Now that the global market share of IE8 has dropped, and supporting it takes a lot of time and extra code, the team decided to drop support for the browser that was released back in 2009.

Summary

This article shows only a few of the new features added to AngularJS 1.3. To learn about all of the new features, read the changelog file on GitHub or check out the AngularJS 1.3 migration guide.

About the Author

Sebastian Müller is Senior Software Engineer at adesso AG in Dortmund, Germany. He spends his time building Single Page Applications and is interested in JavaScript architectures. He can be reached at @Sebamueller on Twitter and as SebastianM on GitHub.

Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus

Natasha Mathur
24 Nov 2018
8 min read
When people talk about tech giants such as Amazon, Facebook, and Google, they usually talk about the great and powerful innovations that these companies have brought to the table and that have perpetually transformed the contemporary world. Of late, though, criticism of these same tech titans has been gaining traction: as they have become what you might call tech monopolies, they hold back the power of innovation from smaller companies. In a podcast episode of Innovation For All, Sheana Ahlqvist talked to Sally Hubbard, an antitrust expert and investigative journalist at The Capitol Forum, about tech giants building monopolies. Here are some key highlights from the podcast.

Let's recall the definition of a monopoly: "A market structure characterized by a single seller, selling a unique product in the market. In a monopoly market, the seller faces no competition, as he is the sole seller of goods with no close substitute. Monopoly market makes the single seller the market controller as well as the price maker. He enjoys the power of setting the price for his goods."

In a nutshell: decrease the prices of your service and drive everyone else out of business. A popular example is John D. Rockefeller, Standard Oil's chief executive, who ruined competitors by cutting the price of oil until they went bankrupt, immediately after which the higher prices returned. Now, although there is no price-fixing in the case of Google or Facebook, since they offer completely free services, they're still monopolies. Let's have a look.

How are Amazon, Google, and Facebook tech monopolies?

If you look at each one of these organizations, Amazon, Facebook, and Google have carved out their own markets, with gargantuan and durable market power vested in the hands of each one of them. According to the US Department of Justice, a market share of greater than 50% has been necessary for courts to find the existence of monopolistic power. A dominant market share is a useful starting point in determining monopoly power.

Going by this rule, Google dominates the search engine market, maintaining an 86.02% market share as of July 2018, as per Statista. This is way over 50%, making Google a monopoly. The majority of Google's revenues are generated through advertising. Similarly, Facebook dominates the social media market, with a worldwide market share of 66.67%, making it a monopoly too. Amazon, on the other hand, has a 41% market share in the e-commerce retail market, which is expected to increase significantly to 50% of the entire market's GMV by 2021. This brings it pretty close to becoming a monopoly in the e-commerce market soon.

Another factor considered under the Sherman Act, a part of antitrust law, when identifying a firm that possesses monopoly power, is the existence of anti-competitive effects, i.e. companies trying to maintain or acquire a dominant position by excluding competitors or preventing new entry. One recent example that comes to mind is Google being fined $5 billion in July this year for breaching the EU's antitrust laws. Google was fined for three types of illegal restrictions on the use of Android, cementing the dominance of its search engine. As per the EU, Google denied its rivals a chance to innovate and compete on the merits, which is illegal under the EU's antitrust laws.
Also Read: A quick look at E.U.'s antitrust case against Google's Android

Monopolies and Social Injustice

Hubbard points out how these tech titans don't have any major competitors or substitutes, and even if you don't pay most of these tech giants with money, you pay them with your data. This is more than enough for world domination, which is always an underlying aspiration for tech companies as they strive to be "the one" in the eyes of their customers by carefully leveraging their data. This data also puts these companies at an unfair advantage over smaller and newer businesses. As Clive Humby, a British mathematician, rightly said, "data is the new oil" in the digital economy.

Hubbard explains how the downsides of this monopoly might not be visible to the consumer, but they affect entrepreneurs and small businesses, who are greatly harmed by the practices of these companies. Take Amazon, for instance: no one wishes to be dependent on their competitor, yet since Amazon has integrated the service of selling products on its platform, everyone is not only competing against Amazon but also dependent on it, as it is Amazon that decides the rules for sellers. Add to this the fact that Amazon holds a ginormous amount of consumer data, putting it at an unfair advantage over others, as it can promote its own products over theirs. There is currently an ongoing EU investigation into Amazon's use of consumer and seller data collected on its platform to better its own products sold on that platform.

Similarly, Google's monopoly is evident in the fact that it gets to decide the losers and winners of the internet through Google Search, prioritizing its products over others. An example of this is Google being fined $2.7 billion by the EU last year, after it ruled that the company had abused its power by promoting its own shopping comparison service at the top of search results.

Facebook, on the other hand, doesn't have direct competition, leaving users with less choice in terms of social network sites and making it a monopoly. Add to that the fact that other major social media platforms, such as Instagram and WhatsApp, are also owned by Facebook. Hubbard explains that because Facebook doesn't have competition, it can prioritize its profits over factors such as user data, as it isn't really concerned about user loss. This is evident in the number of scandals that Facebook has gotten itself into regarding user data.

Facebook is facing a whole lot of data- and privacy-related controversies, the Cambridge Analytica scandal being the most popular one. Facebook suffered the largest security breach in its history, which left 50M user accounts compromised, last month. The Department of Housing and Urban Development (HUD) filed a complaint against Facebook in August, alleging the platform sells ads that discriminate against users based on race, religion, and sexuality. The ACLU also sued Facebook in September for enabling sex and age discrimination through targeted ads. Last week, the New York Times published a bombshell report on how Facebook has been following a strategy of "delaying, denying and deflecting" blame under the leadership of Sheryl Sandberg for all the controversies surrounding it.

Scandals aside, even if users find the content hosted by Facebook displeasing, they don't really have the choice to stop using Facebook, as their friends and family continue to use the platform to stay in touch.
Also, Facebook charges advertisers depending on how many people see a message, instead of charging based on ad clicks. This is why Facebook's algorithm is programmed to prioritize more engaging branded content and ads over everything else.

Monopoly and Gender Inequality

As the market power of these tech giants increases, so does their wealth. Hubbard points out that wealth gets transferred from the many among the working and middle classes to the few belonging to the 1% and 0.1% at the top of the income and wealth distribution. The concentration of market power hurts workers and depresses wages, affecting women and other minority workers the most.

"When general wages go down or stagnate, female workers are even worse off. Women make 78 cents to a man's dollar, with black women making 64 cents and Latina women making 54 cents for every dollar a white man makes. As wages by the bottom 99% of earners continue to shrink, women get paid a mere percentage of fewer dollars. And the top 1% of earners are predominantly men," mentions Sally Hubbard.

There have also been declines in employee mobility, as there are fewer firms competing once giant firms acquire smaller ones. This leads to reduced bargaining power in the hands of employees. Moreover, these firms also impose non-compete clauses and no-poach agreements, putting a damper on workers' ability to switch jobs.

As eloquently put by Hubbard, "these tech platforms are the ones controlling the rules of the arena in which the game is played and are also the ones playing the game". Taking this analogy into consideration, it's anyone's guess who'll win the game.

Related links
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Amazon splits HQ2 between New York and Washington, D.C. after a making 200+ states compete over a year; public sentiments largely negative

4 Gaming innovations that are impacting all of tech

Raka Mahesa
26 Apr 2017
5 min read
Video games are a medium that sits at the intersection of entertainment, art, and technology. Considering that video games are a huge industry, with over $90 billion in yearly revenues, and how the various fields of technology are connected to each other, it makes sense that video games also have an impact on other industries, doesn't it? So let's talk about how gaming has expanded beyond its own industry.

Innovation in hardware

For starters, video games are a big driver of the computer hardware industry. People who mostly use their computer for working with documents, or for browsing the Internet, don't really need high-end hardware. A decent processor, an okay amount of RAM, and just a few hundred gigabytes of storage are all they need to have their computers working for them. On the other hand, people who use their computer to play games need high-end hardware to play the latest games.

These gamers want to play games in the best possible setting, so they demand a GPU that can render their games quickly. This leads to tight competition between graphics card companies, who try their best to produce the most capable GPU at the lowest possible price.

And it's not just the GPU. Unlike movies, with their 24 frames per second, games can run at a much higher number of frames per second. Because games with a high FPS count have smoother animation, hardware makers have started to produce computer monitors with higher refresh rates that can show more frames per second. They've also produced auxiliary hardware (keyboards, mice, and so on) that is more sensitive to user input, because competitive gamers really appreciate all the extra precision they can get from their hardware.

In short, video games have spurred various innovations in computer hardware technology, simply because those innovations provide users with a better gaming experience. One of the interesting parts of this is how the progress looks like a loop. When a game developer produces a video game that requires the most advanced hardware, hardware manufacturers then create better hardware that can render the game more efficiently. Then, game developers notice this additional capability and make sure their next game uses the extra resource, and so on. This endless cycle is the fuel that keeps computer hardware progressing.

Innovation in AI research and technology

Another interesting aspect is how the pursuit of a better GPU has benefitted research into artificial intelligence. Unlike your usual application, artificial intelligence usually runs its processes in parallel instead of sequentially. Modern-day CPUs, unfortunately, aren't really constructed to run hundreds of processes at the same time. GPUs, on the other hand, are designed to process multiple pixels at the same time, which makes them the perfect hardware for running artificial intelligence.

So, thanks to the progress in GPU technology, you don't need a special workstation to run your artificial intelligence project anymore. You just need to hook an off-the-shelf GPU to your PC and your artificial intelligence is ready to run, making AI research accessible to anyone with a computer. And because video games are a big factor in the progress of graphics hardware, we can say that video games have made an indirect impact on the accessibility of AI technology.

Innovation in virtual reality and augmented reality

Another field that video games have made an impact on is virtual and augmented reality.
One of the reasons that virtual reality and augmented reality are making a comeback in recent years is that consumer graphics hardware is now powerful enough to run VR apps. As you may know, VR apps require hardware that's more powerful than the usual mainstream computer. Fortunately, gaming computers nowadays are powerful enough to run those VR apps without causing motion sickness. Even Facebook, which isn't really a gaming company, focuses its VR effort on video games, because right now the only computer that can run VR properly is a gaming computer.

And it's not just VR and AR. These days, when a new platform is launched, its ability to play video games usually becomes one of its selling points. When Apple TV was launched, its capability to play games was highlighted. Microsoft also had a big showcase using HoloLens and Minecraft to demonstrate how the device would work. Video games have become one of the default ways for companies to demonstrate the capabilities of their devices, and to attract more developers to their platforms.

Innovation beyond technology

The impact of video games isn't limited to technological fields. Many have found video games to be an effective teaching and therapeutic tool. For example, soldiers in the army are encouraged to play military shooter games during their off-duty time, so they can stay in a soldier's mindset even when they're not on duty. As for therapy, many studies have found that video games can be a great aid in treating patients with traumatic disorders, as well as in improving autistic patients' social skills.

These fields are just a sample of those that have benefited, and innovated, from the gaming industry. There are still many other fields in which games have made an impact, including serious games, gamification, simulation, and more.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Exploring Language Improvements in C# 7.2 and 7.3

Mark J.
28 Nov 2017
9 min read
With the C# 7 generation, Microsoft has decided to increase the cadence of language releases, releasing minor version numbers, aka point releases, for the first time since C# 1.1. This allows new features to reach programmers faster than ever before, but the policy poses a challenge to writers of books about C#.

Introduction

One of the hardest parts of writing about technology is deciding when to stop chasing the latest changes and adding new content. Back in March 2017, I was reviewing the final drafts of the second edition of my book, C# 7 and .NET Core – Modern Cross-Platform Development. In Chapter 2, Speaking C#, I got to the topic of representing number literals. One of the improvements in C# 7 is the ability to use the underscore character as a digit separator. For example, when writing large numbers in decimal you can improve the readability of number literals using underscores, and you can express binary or hexadecimal number literals by prefixing the number literal with 0b or 0x, as shown in the following code:

// C# 6 and earlier
int decimalNotation = 2000000; // 2 million

// C# 7 and 7.1
int decimalNotation = 2_000_000; // 2 million
int binaryNotation = 0b0001_1110_1000_0100_1000_0000; // 2 million
int hexadecimalNotation = 0x001E_8480; // 2 million

But in the final draft I hadn't included code examples of using underscores in number literals. At the last minute, I decided to add the preceding examples to the book. Unfortunately, I assumed that the underscore could also be used to separate the prefixes 0b and 0x from the digits, and did not check that the code examples would compile until the following day, after the book had gone to print. I had to release an erratum on the book's web page before it even reached the shelves. I felt so embarrassed.

In the third edition, C# 7.1 and .NET Core 2.0 – Modern Cross-Platform Development, I fixed the code examples by removing the unsupported underscores after the prefixes, since they are not supported in C# 7 or C# 7.1. Ironically, just as the third edition was due to go to print, Microsoft released C# 7.2, which adds support for using an underscore after the prefixes, as shown in the following code:

// C# 7.2 and later
int binaryNotation = 0b_0001_1110_1000_0100_1000_0000; // 2 million
int hexadecimalNotation = 0x_001E_8480; // 2 million

Gah! Clearly, I wasn't the only programmer who thought it natural to be able to use underscores after the 0b or 0x prefixes. For the third edition, I decided not to make any last-minute changes to the book. This was partly because I didn't want to risk making a mistake again, and also because the code examples do work; they just don't show the latest improvement. Maybe in the fourth edition I will finally get the whole book perfect! But, of course, in the programming world that's impossible.

Since the third edition covers C# 7.1, I have written this article to cover the improvements in C# 7.2 that are available today, and to preview the improvements coming early in 2018 with C# 7.3.

Enabling C# 7 point releases

Developer tools like Visual Studio 2017, Visual Studio Code, and the dotnet command-line interface assume that you want to use the C# 7.0 language compiler by default.
To use the improvements in a C# point release like 7.1 or 7.2, you must add a configuration element to the project file, as shown in the following markup:

<LangVersion>7.2</LangVersion>

Potential values for the <LangVersion> element are shown in the following table:

LangVersion - Description
7, 7.1, 7.2, 7.3, 8 - Entering a specific version number will use that compiler, if it has been installed.
default - Uses the highest major number without a minor number; for example, 7 in 2017 and 8 later in 2018.
latest - Uses the highest major and highest minor number; for example, 7.2 in 2017, 7.3 early in 2018, 8 later in 2018.

To be able to use C# 7.2, either install Visual Studio 2017 version 15.5 on Windows, or install .NET Core SDK 2.1.2 on Windows, macOS, or Linux from the following link: https://www.microsoft.com/net/download/

Run the .NET Core SDK installer, as shown in the following screenshot:

Setting up a project for exploring C# 7.2 improvements

In Visual Studio 2017 version 15.5 or later, create a new Console App (.NET Core) project named ExploringCS72 in a solution named Bonus, as shown in the following screenshot:

You can download the projects created in this article from the Packt website or from the following GitHub repository: https://github.com/PacktPublishing/CSharp-7.1-and-.NET-Core-2.0-Modern-Cross-Platform-Development-Third-Edition/tree/master/BonusSectionCode/Bonus

In Visual Studio Code, create a new folder named Bonus with a subfolder named ExploringCS72. Open the ExploringCS72 folder. Navigate to View | Integrated Terminal, and enter the following command:

dotnet new console

In either Visual Studio 2017 or Visual Studio Code, edit the ExploringCS72.csproj file, and add the <LangVersion> element, as shown highlighted in the following markup:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <LangVersion>7.2</LangVersion>
  </PropertyGroup>
</Project>

Edit the Program.cs file, as shown in the following code:

using static System.Console;

namespace ExploringCS72
{
  class Program
  {
    static void Main(string[] args)
    {
      int year = 0b_0000_0111_1011_0100;
      WriteLine($"I was born in {year}.");
    }
  }
}

In Visual Studio 2017, navigate to Debug | Start Without Debugging, or press Ctrl + F5. In Visual Studio Code, in the Integrated Terminal, enter the following command:

dotnet run

You should see the following output, which confirms that you have successfully enabled C# 7.2 for this project:

I was born in 1972.

In Visual Studio Code, note that the C# extension version 1.13.1 (released on November 13, 2017) has not been updated to recognize the improvements in C# 7.2. You will see red squiggle compile errors in the editor, even though the code will compile and run without problems, as shown in the following screenshot:

Controlling access to type members with modifiers

When you define a type like a class with members like fields, you control where those members can be accessed from by applying modifiers like public and private. Until C# 7.2, there were five combinations of access modifier keywords. C# 7.2 adds a sixth combination, as shown in the last row of the following table:

Access modifier - Description
private - Member is accessible inside the type only. This is the default if no keyword is applied to a member.
internal - Member is accessible inside the type, or any type that is in the same assembly.
protected - Member is accessible inside the type, or any type that inherits from the type.
public - Member is accessible everywhere.
internal protected - Member is accessible inside the type, any type that is in the same assembly, or any type that inherits from the type. Equivalent to internal OR protected.
private protected - Member is accessible inside the type, or any type that inherits from the type and is in the same assembly. Equivalent to internal AND protected.

Setting up a .NET Standard class library to explore access modifiers

In Visual Studio 2017 version 15.5 or later, add a new Class Library (.NET Standard) project named ExploringCS72Lib to the current solution, as shown in the following screenshot:

In Visual Studio Code, create a new subfolder in the Bonus folder named ExploringCS72Lib. Open the ExploringCS72Lib folder. Navigate to View | Integrated Terminal, and enter the following command:

dotnet new classlib

Open the Bonus folder so that you can work with both projects. In either Visual Studio 2017 or Visual Studio Code, edit the ExploringCS72Lib.csproj file, and add the <LangVersion> element, as shown highlighted in the following markup:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <LangVersion>7.2</LangVersion>
  </PropertyGroup>
</Project>

In the class library, rename the class file from Class1 to AccessModifiers, and edit the class, as shown in the following code:

using static System.Console;

namespace ExploringCS72
{
  public class AccessModifiers
  {
    private int InTypeOnly;
    internal int InSameAssembly;
    protected int InDerivedType;
    internal protected int InSameAssemblyOrDerivedType;
    private protected int InSameAssemblyAndDerivedType; // C# 7.2
    public int Everywhere;

    public void ReadFields()
    {
      WriteLine("Inside the same type:");
      WriteLine(InTypeOnly);
      WriteLine(InSameAssembly);
      WriteLine(InDerivedType);
      WriteLine(InSameAssemblyOrDerivedType);
      WriteLine(InSameAssemblyAndDerivedType);
      WriteLine(Everywhere);
    }
  }

  public class DerivedInSameAssembly : AccessModifiers
  {
    public void ReadFieldsInDerivedType()
    {
      WriteLine("Inside a derived type in same assembly:");
      //WriteLine(InTypeOnly); // is not visible
      WriteLine(InSameAssembly);
      WriteLine(InDerivedType);
      WriteLine(InSameAssemblyOrDerivedType);
      WriteLine(InSameAssemblyAndDerivedType);
      WriteLine(Everywhere);
    }
  }
}

Edit the ExploringCS72.csproj file, and add the <ItemGroup> element to reference the class library in the console app, as shown highlighted in the following markup:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <LangVersion>7.2</LangVersion>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\ExploringCS72Lib\ExploringCS72Lib.csproj" />
  </ItemGroup>
</Project>

Edit the Program.cs file, as shown in the following code:

using static System.Console;

namespace ExploringCS72
{
  class Program
  {
    static void Main(string[] args)
    {
      int year = 0b_0000_0111_1011_0100;
      WriteLine($"I was born in {year}.");
    }

    public void ReadFieldsInType()
    {
      WriteLine("Inside a type in different assembly:");
      var am = new AccessModifiers();
      WriteLine(am.Everywhere);
    }
  }

  public class DerivedInDifferentAssembly : AccessModifiers
  {
    public void ReadFieldsInDerivedType()
    {
      WriteLine("Inside a derived type in different assembly:");
      WriteLine(InDerivedType);
      WriteLine(InSameAssemblyOrDerivedType);
      WriteLine(Everywhere);
    }
  }
}

When entering code that accesses the am variable, note that IntelliSense only shows members that are visible due to access control.
Passing parameters to methods

In the original C# language, parameters had to be passed in the order in which they were declared in the method. In C# 4, Microsoft introduced named parameters, so that values could be passed in a custom order and even made optional. But if a developer chose to name parameters, all of them had to be named. In C# 7.2, you can mix named and unnamed parameters, as long as they are passed in the correct position.

In Program.cs, add a static method, as shown in the following code:

public static void PassingParameters(string name, int year)
{
  WriteLine($"{name} was born in {year}.");
}

In the Main method, add the following statement:

PassingParameters(name: "Bob", 1945);

Visual Studio Code will show an error, as shown in the following screenshot, but the code will compile and execute.

Optimizing performance with value types

The fourth and final feature of C# 7.2 is working with value types while using reference semantics. This can improve performance in very specialized scenarios. You are unlikely to use these features much in your own code unless, like Microsoft themselves, you create frameworks for other programmers to build upon that need to do a lot of memory management. You can learn more about these features at the following link, and a short illustrative sketch follows the conclusion below: https://docs.microsoft.com/en-gb/dotnet/csharp/reference-semantics-with-value-types

Conclusion

I plan to refresh this bonus article when C# 7.3 is released to update it with the new features in that point release. Good luck with all your C# adventures!
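To make the value types section above a little more concrete, here is a minimal sketch of the two C# 7.2 features involved: readonly struct and in parameters. The Point3D type and SumComponents method are illustrative names of my own, not examples from the book:

using static System.Console;

namespace ExploringCS72
{
  // A readonly struct guarantees immutability, which allows the compiler
  // to skip defensive copies when an instance is passed by in-reference.
  public readonly struct Point3D
  {
    public double X { get; }
    public double Y { get; }
    public double Z { get; }

    public Point3D(double x, double y, double z)
    {
      X = x; Y = y; Z = z;
    }
  }

  public class ValueTypeDemo
  {
    // The in modifier, new in C# 7.2, passes the struct by read-only
    // reference: no copy is made, and the method cannot mutate it.
    public static double SumComponents(in Point3D p)
    {
      return p.X + p.Y + p.Z;
    }

    public static void Run()
    {
      var point = new Point3D(1.0, 2.0, 3.0);
      WriteLine(SumComponents(point)); // outputs 6
    }
  }
}

For a large struct called in a tight loop, passing by in-reference avoids copying the full struct on every call, which is exactly the kind of specialized, memory-sensitive scenario this feature targets.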

Notes from a JavaScript Learner

Ed Gordon
30 Jun 2014
4 min read
When I started at Packt, I was an English grad with a passion for working with authors and editorial rules, who really wanted to get to work structuring great learning materials for consumers. I'd edited the largest Chinese-English dictionary ever compiled without speaking a word of Chinese, so what was tech but a means to an end that would allow me to work on my life's ambition?

Fast forward two years, and hours of independent research and reading Hacker News, and I'm more or less able to engage in a high-level discussion about any technology in the world, from enterprise-class CMIS to big data platforms. I can identify their friends and enemies, who uses what, why they're used, and what learning materials are available on the market. I can talk in a more nebulous way about their advantages, and how they "revolutionized" that specific technology type. But, other than hacking CSS in WordPress, I can't use these technologies. My specialization has always been in research, analysis, and editorial know-how.

In April, after deploying my first WordPress site (exploration-online.com), I decided to change this. Being pretty taken with Python, and having spent a lot of time researching why it's awesome (mostly watching Monty Python YouTube clips), I decided to try it out on Codecademy. I loved the straightforward syntax, and was getting pretty handy at the simple things. Then Booleans started (a simple premise), and I realised that Python was far too data intensive. Here's an example:

- Set bool_two equal to the result of -(-(-(-2))) == -2 and 4 >= 16**0.5
- Set bool_three equal to the result of 19 % 4 != 300 / 10 / 10 and False

This is meant to explain to a beginner how the Boolean operator "and" returns True when the statements on either side are true. This is a fairly simple thing to get, so I don't really see why they need to use expressions that I can barely read, let alone compute... I quickly decided Python wasn't for me. I jumped ship to JavaScript.

The first thing I realised was that all programming languages are pretty much the same. Variables are more or less the same. Functions do a thing. The syntax changes, but it isn't like changing from English to Spanish. It's more like changing from American English to British English. We're all saying the same things; there are just slightly different rules.

The second thing I realized was that JavaScript is going to be entirely more useful to me in the future than Python. As the lingua franca of the Internet and the browser, it's going to be more and more influential as adoption of browser apps over native apps increases. I've never been a particularly "mathsy" guy, so Python machine learning isn't something I'm desperate to master. It also means that I can, in the future, work with all the awesome tools that I've spent time researching: MongoDB, Express, Angular, Node, and so on.

I bought Head First JavaScript Programming (Eric T. Freeman and Elisabeth Robson, O'Reilly Media), and aside from the 30 different fonts that are making my head ache, I'm finding the pace and learning narrative far better than the various free solutions I've used, and I actually feel I'm starting to progress. I can read things now and hack stuff together on W3Schools examples. I still don't know what everything does, but I no longer feel like I'm standing reading a sign in a completely foreign language.

What I've found books are great at is reducing the copy/paste mindset that creeps into online learning tools. Copy/paste, I think, is fine when you actually know what it is you're copying.
To learn something, and be comfortable using it into the future, I want to be able to say that I can write it when needed. So far, I've learned how to log the entire "99 Bottles of Beer on the Wall" to the console. I've rewritten a 12-line code block in 6 lines (felt like a winner). I've made some boilerplate code that I've no doubt I'll be using for the next dozen years. All in all, it feels like progress. It's all come from books.

I'll be updating this series regularly as I dip my toe into the hundreds of tools that JavaScript supports within the web developer's workflow, but for now I'm going to crack on with the next chapter.

For all things JavaScript, check out our dedicated page! Packed with more content, opinions, and tutorials, it's the go-to place for fans of the leading language of the web.

Top 5 cybersecurity trends you should be aware of in 2018

Vijin Boricha
11 Jul 2018
5 min read
Cybersecurity trends seem to be changing at an incredible rate. That poses new opportunities for criminals and new challenges for the professionals charged with securing our systems. High-profile attacks not only undermine trust in huge organizations, they also highlight a glaring gap in how we manage cybersecurity in a rapidly changing world. They also show that attackers are adaptive and incredibly intelligent, evolving their techniques to match new technologies and new behaviors. The big question is what the future will bring: which cybersecurity trends will impact the way cybersecurity experts work, and the way cybercriminals attack, for the rest of 2018 and beyond? Let's explore some of the top cybersecurity trends and predictions of 2018.

Artificial Intelligence and machine learning based cyber attacks and defenses

AI and ML have started impacting major industries in various ways, but one of the most exciting applications is in cybersecurity. Basically, Artificial Intelligence and Machine Learning algorithms can learn from past events in order to help predict and identify vulnerabilities within a software system. They can also be used to detect anomalies in behavior within a network. A report from Webroot claims that more than 90% of cybersecurity professionals use AI to improve their security skills. However, while AI and machine learning can help security professionals, they are also being used by cybercriminals. It seems obvious: if cybersecurity pros can use AI to identify vulnerabilities, so can the people who seek to exploit them. Expect this back and forth to continue throughout 2018 and beyond.

Ransomware is spreading like fire

Storing data on the cloud has many benefits, but it can be an easy target for cybercriminals. Ransomware is one such technique: criminals target a certain area of data and hold it to ransom. It's already a high-profile cybersecurity concern. Just look at WannaCry and Petya, two of the biggest cybersecurity attacks of 2017, or the Meltdown and Spectre vulnerabilities disclosed soon after. The bigger players of the cloud market (Google, AWS, and Azure) are trying to make life difficult for attackers, but smaller cloud service providers end up paying customers for data breaches. The only way these attacks can be reduced is by performing regular backups, updating security patches, and strengthening real-time defenses.

Complying with GDPR

The GDPR (General Data Protection Regulation) is an EU regulation that tightens up data protection and privacy for individuals within the European Union. The ruling includes mandatory rules that all companies will have to follow when processing and storing personal data. From 25 May 2018, when the GDPR comes into effect, important changes will be implemented to the current data protection directive. To mention a few, these include increased territorial scope, stricter consent laws, elevated rights, and more. According to a Forrester report, 80% of companies will fail to comply with the GDPR, and 50% of those will choose not to comply, considering the cost of compliance. Penalties for non-compliance can reach up to €20m or 4% of worldwide annual turnover, whichever is greater.

The rise of cyberwar

Taking the current cybersecurity scenario into consideration, there is a high possibility that 2018 will be the year of international conflict in cyberspace. This may include cyber crimes against government and financial systems, or against their infrastructure and utilities.
Chances are that cyber-terrorism groups will target sensitive areas like banks, the press, government, law enforcement, and similar areas. The Ashley Madison attack – which involved attackers threatening to release personal information about users if the site was not shut down – shows that ideologically motivated attacks are often very targeted and sophisticated, with the goal of data theft and extortion. The attack on Ashley Madison is testament to the fact that companies need to be doing more as attackers become more motivated.

You should not be surprised to see cyber attacks going beyond financial gain. The coming year could witness politically motivated cyber crimes, designed to acquire intelligence that benefits a particular political entity. These methods can also be used to target electronic voting systems in order to control public opinion. Such sophisticated attacks are usually well-funded and lead to public chaos. Governments will need to run extensive checks to ensure their networks and ecosystems are well protected. Such instances might lead to the loss of the right to remain anonymous on the web. Like everything else, this move will have two sides to the coin.

Attacking cyber currencies and blockchain systems

Since Bitcoin and blockchain boomed in 2017, they have become a crucial target area for hackers. Chances are that attackers will target smaller blockchain systems that opt for weaker cryptographic algorithms to increase performance. On the other hand, the possibility of a cryptographic attack against Bitcoin itself is minimal. The major worry here would be an attack on a block with minimal security practices, which could eventually compromise a larger blockchain system. A major advantage for attackers is that they don't really need to know who the opposite party is, as only a verified participant is authorised to execute the trade. Here, trust and risk play an important part, and that is blockchain's sweet spot. For example, receiving payments in government-issued currencies carries a higher chance of getting caught, whereas cryptocurrency payments have a higher probability of succeeding.

Well, this may be the end of this article, but it is not the end of how things might turn out in 2018. We stand midway through another year, and the war of cyberthreats rages on. Don't be surprised to hear something different or new, as malicious hackers keep trying newer techniques and methodologies to break systems.

Related links
WPA3: Next-generation Wi-Fi security is here
The 10 most common types of DoS attacks you need to know
12 common malware types you should know

Why mobile VR sucks

Amarabha Banerjee
09 Jul 2018
4 min read
If you're following the news, chances are you've heard about Virtual Reality (VR) headsets like the Oculus Rift, Samsung Gear VR, and HTC Vive. Trending terms and buzzwords are all good for a business or technology that's novel and yet to be adopted by the majority of consumers. But the proof of the pudding is when people actually start using the tech, and the first reactions to mobile VR are not at all good. This has even led Oculus CTO John Carmack to state, "We are coasting on novelty, and the initial wonder of being something people have never seen before." The verdict on present-day mobile VR technologies and headsets is in: they suck in their present form. If you want to know why, and what can make them better, read ahead.

Hardware is expensive

Mobile headsets are costly, mostly in the $399 to $799 range. The most successful VR headset to date is Google Cardboard. The reason: it's dirt cheap, and it doesn't need much setup or customization. Such a high price at the initial launch phase of any tech is going to make users wary. Not many people want to buy an expensive new toy without knowing exactly how it's going to be.

VR games don't match up to video game quality

The initial VR games for mobile were very poor. There are well over a billion mobile gamers across the world, undeniably a huge market to tap into. But we have to keep in mind that these gamers already have access to high-quality games, which they can play just by tapping their mobile screen. For them to strap on a headset and get immersed in VR games, the incentive needs to be too alluring to resist. The current crop of VR games lacks complexity, and their UI design is not intuitive enough to hold the attention of a user for a long duration, especially when playing a VR game means strapping on that headgear. These VR games also take too much time to load, which is a huge negative.

The hype vs reality gap is improving, but it's painfully slow

VR is currently in the initial breakthrough stage, where expectations are high. But the games and apps are not up to the mark, and hence those who have used it are giving it a thumbs down. The word-of-mouth publicity is mostly negative, and this is creating a negative impact on mobile VR as a whole. The chart below shows the gap between initial expectations and the reality of VR, and how it might shape up in the near future, according to Unity CEO John Riccitiello.

AR vs VR vs MR: A concoction for confusion

The popularity of Augmented Reality (AR) and the emergence of Mixed Reality (MR) – an amalgamation of both AR and VR – have left developers unsure about which platform and which methodology to adopt. UX and UI design are quite different across AR, VR, and MR, and hence all three disciplines need dedicated development resources. For this to happen, these disciplines will have to be formalized first, and until then, the quality of the apps will not improve drastically.

No unified VR development platform

Mobile VR is dependent on SDKs, and primarily on the two game engines, Unity and Unreal Engine, that have come up with support for VR game development. While Unity is one of the biggest names in the game development industry, a dedicated and unified VR development platform is still missing in action. As for Unity and Unreal Engine, VR will not be their priority any time soon. Things can change if and when a tech giant like Google, Microsoft, or Facebook
dedicates its resources to creating VR apps and games for mobile. Although Google has Cardboard, Facebook has unveiled React VR and support for AR development, and Microsoft has its own game going on with HoloLens AR and MR development, the trend that started it all still seems to be lost among its newer cousins. I think VR will be big, but it will have to wait for implementation by some major business or company. Till then, we will have to wear our ghastly headsets and imagine that we are living in the future.

Related links
Game developers say Virtual Reality is here to stay
Microsoft introduces SharePoint Spaces, adds virtual reality support to SharePoint
Build a Virtual Reality Solar System in Unity for Google Cardboard

Unity Machine Learning Agents: Transforming Games with Artificial Intelligence

Amey Varangaonkar
30 Mar 2018
4 min read
Unity has undoubtedly been one of the leaders when it comes to developing cross-platform products, going from strength to strength in developing visually stimulating 2D as well as 3D games and simulations. With Artificial Intelligence revolutionizing the way games are developed these days, Unity have identified the power of Machine Learning and introduced Unity Machine Learning Agents. With this, they plan to empower game developers and researchers in their quest to develop intelligent games, robotics, and simulations.

What are Unity Machine Learning Agents?

Traditionally, game developers have hard-coded the behaviour of game agents. Although effective, this is a tedious task, and it also limits the intelligence of the agents. Simply put, the agents are not smart enough. To overcome this obstacle, Unity have simplified the training process for game developers and researchers by introducing Unity Machine Learning Agents (ML-Agents, in short). Through just a simple Python API, game agents can now be trained using deep reinforcement learning, an advanced form of machine learning, to learn from their actions and modify their behaviour accordingly. These agents can then be used to dynamically modify the difficulty of the game.

How do they work?

As mentioned earlier, Unity ML-Agents are designed to work based on the concept of deep reinforcement learning, a branch of machine learning in which agents are trained to learn from their own actions. (Figure: the reinforcement learning training cycle.)

The learning environment to be configured for the ML-Agents consists of three primary objects:

- Agent: Every agent has a unique set of states, observations, and actions within the environment, and is assigned rewards for particular events.
- Brain: A brain decides what action any agent is supposed to take in a particular scenario. Think of it as a regular human brain, which basically controls the bodily functions.
- Academy: This object contains all the brains within the environment.

To train the agents, a variety of scenarios are made possible by varying how the different components of the environment (explained above) are connected. Some are single agents, some are simultaneous single agents, and others are co-operative or competitive multi-agents, and more. You can read more about these possibilities on the official Unity blog.

Apart from the way these agents are trained, Unity are also adding some cool new features to the ML-Agents. Some of these are:

- Monitoring the agents' decision-making to make it more accurate
- Incorporating curriculum learning, by which the complexity of the tasks can gradually be increased to aid more effective learning
- Imitation learning, a newly-introduced feature wherein the agents simply mimic the actions we want them to perform, rather than learning on their own

What next for Unity Machine Learning Agents?

Unity recently announced the release of the v0.3 beta SDK of the ML-Agents, and have been making significant progress in this domain to develop smarter, more intelligent game agents that can be used with the Unity game engine. Still very much in the research phase, these agents can also be used as an example by academic researchers to study the complex behaviour of trained models in different environments and scenarios in which the variables associated with the in-game physics and visual appearance can be altered.
Going forward, these agents could also be used by enterprises for large-scale simulations, in robotics, and in the development of autonomous vehicles. These are interesting times for game developers, and for Unity in particular, in their quest to develop smarter, cutting-edge games. The inclusion of machine learning in their game development strategy is a terrific move, although it will take some time for this to be perfected and incorporated seamlessly. Nonetheless, all the research and innovation being put in this direction certainly seems well worth it!
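To give a flavour of what the Agent object described above looks like in practice, here is a minimal C# sketch of a custom agent, loosely modelled on the rolling-ball example from the ML-Agents documentation. The class and field names are illustrative, and the exact method names (CollectObservations, AgentAction, and so on) changed between the early beta releases, so treat this as an assumption-laden sketch rather than the definitive API:

using UnityEngine;
using MLAgents; // SDK namespace in later betas; illustrative, earlier versions differed

// A hypothetical agent that learns to roll towards a target.
// It inherits from the SDK's Agent base class.
public class RollerAgent : Agent
{
    public Transform target;    // assigned in the Unity Inspector
    private Rigidbody body;

    void Start()
    {
        body = GetComponent<Rigidbody>();
    }

    // Observations: the state the brain "sees" at each step.
    public override void CollectObservations()
    {
        AddVectorObs(target.position);     // where the target is
        AddVectorObs(transform.position);  // where the agent is
        AddVectorObs(body.velocity.x);     // current velocity
        AddVectorObs(body.velocity.z);
    }

    // Actions: the brain outputs two continuous values, used as forces.
    public override void AgentAction(float[] vectorAction, string textAction)
    {
        var force = new Vector3(vectorAction[0], 0f, vectorAction[1]);
        body.AddForce(force * 10f);

        float distance = Vector3.Distance(transform.position, target.position);
        if (distance < 1.4f)
        {
            AddReward(1.0f); // reached the target: positive reward
            Done();          // end the episode
        }
        else
        {
            AddReward(-0.001f); // small step penalty encourages speed
        }
    }
}

In this setup, the brain assigned to the agent decides the two force values at each step, and the reward signal (plus one for reaching the target, a small penalty per step) is what the reinforcement learning algorithm maximizes during training.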