
How-To Tutorials - Programming

1081 Articles

Getting Started with Microsoft Dynamics CRM 2013 Marketing

Packt
21 Apr 2014
11 min read
(For more resources related to this topic, see here.)

Present day marketing

Marketing is the process of engaging with target customers to communicate the value of a product or service in order to sell it. Marketing is used to attract new customers, nurture prospects, and up-sell and cross-sell to existing customers. Companies spend at least five percent of their revenue on marketing efforts to maintain their market share, and any company that wants to grow its market share will spend more than 10 percent of its revenue on marketing. In competitive sectors, such as consumer products and services, marketing expenditure can go up to 50 percent of revenue, especially with new products and service offerings.

Marketing happens over various channels such as print media, radio, television, and the Internet. Successful marketing strategies target a specific audience with targeted messages at high frequency, which is very effective. Before the era of the Internet and social networks, buyers were less informed and the seller had better control over the sales pipeline by exploiting this ignorance. In this digital age, however, buyers are able to research beforehand to get enough information about the products they want and, ultimately, they control the process of buying.

Social media has turned out to be a great marketing platform for companies, and it hugely impacts a company's reputation with respect to its products and customer service. Marketing with social media is about creating quality content that attracts the attention of the social platform's users, who then share the content with their connections, creating the same effect as word-of-mouth marketing. The target customers learn about the company and its products from their trusted connections; the promotional message is received from the user's social circle and not the company itself. Social platforms such as Facebook and Twitter are constantly working towards delivering targeted ads to users based on their interests and behaviors. Business Insider reports that Facebook generates $1 billion in revenue each quarter from advertisements, and Twitter is estimated to have generated more than $500 million in advertisement revenue in 2013, which clearly shows the impact of social media on marketing today.

Buyers are able to make well-informed decisions and often choose to engage with a salesperson only after due diligence. For example, when buying a new mobile phone, most of us know which model to buy, what the specifications are, and what the best price is in the market before we even go to the retailer. Marketing is now a revenue process: it is not about broadcasting product information to all, it is about targeting and nurturing relationships with prospects from an early stage until they become ready to buy. Marketing is not just throwing bait and expecting people to take it. Prospects in today's information age learn at their own pace and want to be reached only when they need more information or are ready to buy. Let's now explore some of the challenges of present day marketing.

Marketing automation with Microsoft Dynamics CRM 2013

CRM has been passively used for a long time by marketers as a customer data repository and as a mining source for business intelligence reports, as they perceived CRM to be a more sales-focused customer data application. The importance of collaboration between the sales and marketing teams has inevitably evolved CRM into a Revenue Performance Management (RPM) platform, with marketing features transforming it into a proactive platform. It can not only record data effectively, but also synthesize, correlate, and display the data with powerful visualization techniques that can identify patterns, relationships, and new sales opportunities. The common steps involved in marketing with Microsoft Dynamics CRM are shown in the following figure:

Important marketing steps in Microsoft Dynamics CRM

Targeting

CRM can help in filtering and selecting a well-defined target population using advanced filtering and segmentation features on a clean and up-to-date data repository. It can select prospects based on demographic data such as purchase history and responses to previous campaigns, which will profile campaign distribution and significantly improve campaign performance. Microsoft Dynamics CRM 2013 can be easily integrated with other line-of-business applications, which can help create intelligent marketing lists in CRM from various sources. For example, it can integrate with ERP and other financial software to segment customers into various marketing lists that target very specific customers and prospects. Workflows and automations supported by most CRM platforms can be used to build the logic for segmentation and the creation of qualified lists. Targeting with Microsoft Dynamics CRM 2013 can help create groups that are likely to respond to certain types of campaigns and help marketers target customers with the right campaign types.

Automation and execution

CRM applications can help create, manage, and measure your marketing campaigns. They can track the current status, messages sent, and responses received against each member of the list, and measure real-time performance with reports and dashboards. Microsoft Dynamics CRM 2013 can be used to plan and establish the budget for a campaign, track the expenses, and measure ROI. The steps involved in campaign execution and message distribution can be defined along with the schedule. Message distribution and response capture can be automated with CRM, which helps in running multiple promotions or performing nurture campaigns. Microsoft Dynamics CRM 2013 can perform marketing tasks in parallel and track which prospect is responding to which campaign to establish the effectiveness of a campaign. With powerful integration with other marketing automation platforms, marketers can create and customize the message, create landing pages for the campaign within CRM, and then use the built-in e-mail marketing engine to distribute the message, which can embed tracking tokens into the e-mail to capture and relate the incoming response. Integration of CRM with popular e-mail clients avoids switching applications and errors in copying data back and forth. Microsoft Dynamics CRM 2013 can capture preferences, advise on the best time and channel to engage with the customer, and provide feedback on products and services.

Close looping

Close loop marketing is the practice of capturing and relating the responses to marketing messages in order to measure effectiveness, constantly optimize the process, and refine your message to improve its relevancy. This, in turn, increases the rate of conversion and ROI. It also involves an inherent close looping between the marketing and sales teams, who collaborate to provide a single view of the progression from prospect to sale. The division between the marketing and sales departments leads to a lack of visibility and efficiency, as they are unable to support each other and cannot measure what works and what does not, eventually reducing the overall efficiency of both teams put together. Close loop marketing has gained great importance because companies have started perceiving the sales and marketing teams together as revenue teams who are jointly responsible for increasing revenue.

Close looping enables us to compare the outcome of multiple campaigns by multiple factors such as the campaign type, number of responses, type of respondents, and response time. Microsoft Dynamics CRM 2013 can track various parameters such as the types of messages and the frequency of marketing, which can be compared against prior marketing campaigns to identify trends and predict customer behavior. In order to achieve close loop marketing, we need to centralize data. This brings together the customer's profile, the customer's behavioral data, marketing activities, and sales interactions in one place, so we can use automation to make this data actionable and continuously evolve the marketing processes for the targeting and nurturing of customers. CRM can be the centralized repository for data and can also automate the interactions between the sales and marketing teams. Also, the social CRM features allow users to follow specific records and create connections with unrelated records, which enables a free flow of information between the teams. This elicits great detail about the customer and supports actionable use of information to increase revenue efficiently without resorting to marketing myths and assumptions.

Revenue management by collaboration

The marketing and sales teams together are the revenue team for an organization and are responsible for generating and increasing revenue. It is imperative to align the sales and marketing teams for collaboration, as the marketing team owns the message and the sales team owns the relationship. Microsoft Dynamics CRM 2013 offers an integrated approach where a lead can be passed from the marketing team to the sales team based on a threshold lead score or other qualification criteria agreed upon by both teams. This qualification of the lead by the marketing team retains all the previous interactions that the marketing team had with the lead, which helps the sales team understand the buyer's interests and motivation better by getting a 360 degree view of the customer. CRM tracks the status, qualification, and activities performed against the lead. This provides a comprehensive history of all the touch points with a lead and brings transparency and accountability to both the marketing and sales teams. It ensures that only fully qualified leads are sent to sales, resulting in a shorter sales cycle and improved efficiency. This strategic collaboration between the sales and marketing teams provides valuable feedback on the effectiveness of the marketing campaigns as well as the sales process. Microsoft Dynamics CRM 2013 can enable interdependence between the marketing and sales teams to share a common revenue goal and receive joint credit for achievement, becoming the organization's RPM system.

To summarize, as a marketing automation platform, Microsoft Dynamics CRM 2013 can create marketing campaigns, identify target customers to create marketing lists, associate relevant products and promotional offers with the lists, develop tailored messages, distribute messages through various channels as per the schedule, establish the campaign budget and ROI forecast, capture responses and inquiries while routing them to the right team, track the progress and outcome of the sale, and report the campaign ROI. CRM has evolved from being a passive data repository and status-tracking system to a tactical and strategic decision support system that provides more than just a 360 degree view of the customer, and is not limited to tracking opportunities, managing accounts and contacts, and capturing call notes. CRM can be one of the key applications for active marketing and revenue performance management: it can help build relationships with customers through personalized communications and behavioral tracking, enable automation of marketing programs, measure marketing performance and ROI, and connect the sales and marketing teams to let them function as one accountable revenue team. We will now explore the stages involved in the progression of a lead to a customer using a lead funnel.

Lead scoring and conversion

The sales and marketing teams together come up with a methodology for lead scoring to determine whether a lead is sales-ready. Scoring can be a manual or automated process that takes into consideration the interest shown by the lead in your product to assign points to a prospect and rank them as cold, warm, or hot. When the prospect's rank reaches an agreed threshold, the lead is considered qualified and is assigned to the sales team after acceptance by sales. The process of lead scoring can vary from company to company, but some of the general criteria used for scoring are demography, expense budget, company size, industry, role and designation of the lead contact, and profile completeness. In addition, scoring also takes into consideration various behavioral characteristics to measure the frequency and quality of engagement, such as responses to e-mails and contacts, the number of visits to the website, the pages visited, app downloads, and following on social media. Lead scoring is a critical process that helps align the sales and marketing teams within the organization by passing quality leads to the sales team and making sales effective.

Summary

In this article, we saw present day marketing and the common steps involved in marketing with Microsoft Dynamics CRM 2013, such as targeting, automation and execution, close looping, and revenue management by collaboration.

Resources for Article:

Further resources on this subject:
Microsoft Dynamics CRM 2011 Overview [Article]
Introduction to Reporting in Microsoft Dynamics CRM [Article]
Overview of Microsoft Dynamics CRM 2011 [Article]

Analyzing a Complex Dataset

Packt
16 Apr 2014
6 min read
(For more resources related to this topic, see here.)

We may need to analyze volumes of data that are too large for a simple spreadsheet. A document with more than a few hundred rows can rapidly become bewildering, and handling thousands of rows can be very challenging indeed. How can we do this more sophisticated analysis?

The answer for many such analytic problems is Python. It handles larger sets of data with ease, and we can very easily write sophisticated sorting, filtering, and calculating rules. We're not limited by the row-and-column structure, either.

We'll look at some data that—on the surface—is boring, prosaic budget information. We've chosen budgets because they're relatively simple to understand without a lot of problem domain background. Specifically, we'll download the budget for the city of Chicago, Illinois. Why Chicago? Chicago has made its data available to the general public, so we can use this real-world data for an example that's not contrived or simplified. A number of other municipalities have also made similar data available.

How do we proceed? Clearly, step one is to get the data so we can see what we're working with. For more information, we can start with this URL: https://data.cityofchicago.org/

The city's data portal offers a number of formats: CSV, JSON, PDF, RDF, RSS, XLS, XLSX, and XML. Of these, JSON is perhaps the easiest to work with. We can acquire the appropriation information with the following URL: https://data.cityofchicago.org/api/views/v9er-fp6q/rows.json?accessType=DOWNLOAD

This will yield a JSON-format document. We can gather this data and cache it locally with a small Python script:

```python
import urllib.request

budget_url = "https://data.cityofchicago.org/api/views/v9er-fp6q/rows.json?accessType=DOWNLOAD"

# Open the target file in binary mode because urlopen() returns bytes.
with open("budget_appropriations.json", "wb") as target:
    with urllib.request.urlopen(budget_url) as document:
        target.write(document.read())
```

We can use a similar script to gather the salary data that goes with the budget appropriation. The salary information will use this URL: https://data.cityofchicago.org/api/views/etzw-ycze/rows.json?accessType=DOWNLOAD

Clearly, we can create a more general script to download from these two slightly different URLs to gather two very different JSON files. We'll focus on the appropriations for this article because the data organization turns out to be simpler.
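As a rough illustration of such a generalized downloader, the following sketch loops over (URL, filename) pairs; the function name and the table of sources are assumptions for illustration, not the article's own listing:

```python
import urllib.request

SOURCES = [
    ("https://data.cityofchicago.org/api/views/v9er-fp6q/rows.json?accessType=DOWNLOAD",
     "budget_appropriations.json"),
    ("https://data.cityofchicago.org/api/views/etzw-ycze/rows.json?accessType=DOWNLOAD",
     "budget_salaries.json"),
]

def download(url, filename):
    # Cache one JSON document locally; urlopen() returns bytes, so write in binary mode.
    with open(filename, "wb") as target:
        with urllib.request.urlopen(url) as document:
            target.write(document.read())

if __name__ == "__main__":
    for url, filename in SOURCES:
        download(url, filename)
```

A script along these lines keeps the collection step fully automated, which is the point the next paragraph makes about avoiding manual downloads.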
We could download this data manually using a browser, but it rapidly becomes difficult to automate data collection and analysis when we introduce a manual step involving a person pointing and clicking in a browser. A similar comment applies to trying to use a spreadsheet to analyze the data: merely putting it on a computer doesn't really automate or formalize a process. The results of step one, then, are two files: budget_appropriations.json and budget_salaries.json.

Parsing the JSON document

Since the data is encoded in JSON, we can simply open the files in our development environment to see what they look like. Informally, it appears that the two data sets have some common columns and some distinct columns. We'll need to create a more useful side-by-side comparison of the two files.

We'll import the json module. We almost always want to pretty-print during exploration, so we'll import the pprint() function just in case we need it. Here are the first two imports:

```python
import json
from pprint import pprint
```

One thing we may have noted when looking at the JSON is that there are two important-looking keys: a 'meta' key and a 'data' key. The 'meta' key is associated with a sequence of column definitions. The 'data' key is associated with a sequence of rows of actual data. We can use a quick script like the following to discover the details of the metadata:

```python
def column_extract():
    for filename in "budget_appropriations.json", "budget_salaries.json":
        with open(filename) as source:
            print(filename)
            dataset = json.load(source)
            for col in dataset['meta']['view']['columns']:
                if col['dataTypeName'] != "meta_data":
                    print(col['fieldName'], col['dataTypeName'])
            print()
```

We've opened each of our source files and loaded the JSON document into an internal mapping named dataset. The metadata can be found by navigating through the dictionaries that are part of this document. The path is dataset['meta']['view']['columns']. This leads to a sequence of column definitions. For each column definition, we can print out two relevant attributes using the keys 'fieldName' and 'dataTypeName'. This will reveal items that are dimensions and items that are facts within this big pile of data.

This small function can be used in a short script file to see the various columns involved. We can write a short script like this:

```python
if __name__ == "__main__":
    column_extract()
```

We can see that we have columns whose types are number, text, and money. The number and text columns can be termed "dimensions"; they describe the facts. The money columns are the essential facts that we'd like to analyze.

Designing around a user story

Now that we have the data, it makes sense to see where we're going with it. Our goal is to support queries against salaries and appropriations. Our users want to see various kinds of subtotals and some correlations between the two datasets. The term "various kinds" reveals that the final analysis details are open-ended. It's most important for us to build a useful model of the data rather than solve some very specific problem. Once we have the data in a usable model, we can solve a number of specific problems.

A good approach is to follow the Star Schema, or Facts and Dimensions, design pattern that supports data warehouse design. We'll decompose each table into facts—measurements that have defined units—and dimensions that describe those facts. We might also call the dimensions attributes of Business Entities that are measured by the facts. The facts in a budget analysis context are almost always money. Almost everything else will be some kind of dimension: time, geography, legal organization, government service, or financial structure.

In the long run, we might like to load a relational database using Python objects. This allows a variety of tools to access the data, but it would lead to a more complex technology stack. For example, we'd need an Object-Relational Mapping (ORM) layer in addition to a star schema layer. For now, we'll populate a pure Python model. We'll show how this model can be extended to support SQLAlchemy as an ORM layer.
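To make the facts-and-dimensions idea concrete, here is a minimal sketch of what a pure Python model in that spirit could look like. The field names are placeholders chosen for illustration and are not the actual column names of the Chicago datasets:

```python
from collections import namedtuple

# Dimensions describe the facts; the money values are the facts we measure.
Dimension = namedtuple("Dimension", ["fund_type", "department", "description"])
Fact = namedtuple("Fact", ["dimension", "amount"])

def build_facts(raw_rows):
    """Turn raw (fund_type, department, description, amount) tuples into Fact objects."""
    facts = []
    for fund_type, department, description, amount in raw_rows:
        dimension = Dimension(fund_type, department, description)
        facts.append(Fact(dimension, float(amount)))
    return facts

if __name__ == "__main__":
    sample = [("GENERAL FUND", "FINANCE", "SALARIES", "1250000.00")]
    for fact in build_facts(sample):
        print(fact.dimension.department, fact.amount)
```

Keeping the facts separate from the dimensions that describe them is what makes the later subtotal and correlation queries straightforward, and it maps naturally onto a star schema if we later move to a relational database.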

The Fabric library – the deployment and development task manager

Packt
14 Apr 2014
4 min read
(For more resources related to this topic, see here.)

Essentially, Fabric is a tool that allows the developer to execute arbitrary Python functions via the command line, and it also provides a set of functions for executing shell commands on remote servers via SSH. Combining these two things offers developers a powerful way to administer the application workflow without having to remember the series of commands that need to be executed on the command line. The library documentation can be found at http://fabric.readthedocs.org/.

Installing the library in PTVS is straightforward. Like all other libraries, to insert this library into a Django project, right-click on the Python 2.7 node in Python Environments of the Solution Explorer window. Then, select the Install Python Package entry.

The Python environment contextual menu

Clicking on it brings up the Install Python Package modal window, as shown in the following screenshot:

It's important to use easy_install to download from the Python Package Index. This will bring the precompiled versions of the library into the system instead of the plain Python C libraries that have to be compiled on the system.

Once the package is installed in the system, you can start creating tasks that can be executed outside your application from the command line. First, create a configuration file, fabfile.py, for Fabric. This file contains the tasks that Fabric will execute. The previous screenshot shows a really simple task: it prints out the string hello world once it's executed. You can execute it from the command prompt by using the Fabric command fab, as shown in the following screenshot:

Now that you know that the system is working fine, you can move on to the juicy part where you can create tasks that interact with a remote server through SSH. Create a task that connects to a remote machine and finds out the type of OS that runs on it.

The env object provides a way to add credentials to Fabric in a programmatic way.

We have defined a Python function, host_type, that runs a POSIX command, uname -s, on the remote machine. We also set up a couple of variables to tell Fabric which remote machine we are connecting to, that is, env.hosts, and the password that has to be used to access that machine, that is, env.password. It's never a good idea to put plain passwords into the source code, as is shown in the preceding screenshot example.

Now, we can execute the host_type task from the command line as follows:

The Fabric library connects to the remote machine with the information provided and executes the command on the server. Then, it brings back the result of the command itself in the output part of the response.

We can also create tasks that accept parameters from the command line. Create a task that echoes a message on the remote machine, starting with a parameter, as shown in the following screenshot. The following are two examples of how the task can be executed:

We can also create a helper function that executes an arbitrary command on the remote machine as follows:

```python
def execute(cmd):
    run(cmd)
```

We are also able to upload a file to the remote server by using put. The first argument of put is the local file you want to upload, and the second one is the destination folder's filename. Let's see what happens.
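Since the task listings in this article appear only in screenshots that are not reproduced here, the following fabfile.py sketch pulls the tasks described above into one place. It is an illustrative reconstruction under Fabric 1.x conventions, not the article's exact code, and the host address, password, and file paths are placeholder assumptions:

```python
from fabric.api import env, put, run

# Placeholder credentials; as noted above, real passwords should not live in source code.
env.hosts = ['192.168.1.42']
env.password = 'not-a-real-password'

def hello():
    # The simple first task: prints hello world when run with `fab hello`.
    print("hello world")

def host_type():
    # Runs a POSIX command on the remote machine over SSH to report its OS type.
    run('uname -s')

def echo_message(message):
    # Accepts a parameter from the command line, e.g. fab echo_message:message="deployed".
    run('echo %s' % message)

def execute(cmd):
    # Helper that executes an arbitrary command on the remote machine.
    run(cmd)

def upload():
    # put() copies a local file to the given destination path on the remote server.
    put('local_settings.py', '/tmp/local_settings.py')
```

With a file along these lines in place, commands such as `fab host_type` or `fab echo_message:message="hello"` would run the corresponding task against the hosts listed in env.hosts.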
Deploying process with Fabric

The possibilities of using Fabric are really endless, since the tasks can be written in plain Python. This provides the opportunity to automate many operations and focus more on development instead of on how to deploy your code to servers and maintain them.

Summary

This article provided you with an in-depth look at remote task management and schema migrations using the third-party Python library Fabric.

Resources for Article:

Further resources on this subject:
Through the Web Theming using Python [Article]
Web Scraping with Python [Article]
Python Data Persistence using MySQL [Article]

Advanced SOQL Statements

Packt
10 Apr 2014
4 min read
(For more resources related to this topic, see here.)

Relationship queries

Relationship queries are mainly used to query the records from one or more objects in a single SOQL statement in Salesforce.com. We cannot query the records from more than one object without having a relationship between the objects.

Filtering multiselect picklist values

The INCLUDES and EXCLUDES operators are used to filter multiselect picklist fields. A multiselect picklist field in Salesforce allows the user to select more than one value from the list of values provided.

Sorting in both the ascending and descending orders

Sometimes we may need to sort the records fetched by a SOQL statement on two fields: one field in the ascending order and another field in the descending order. The following sample query will help us achieve this easily:

```sql
SELECT Name, Industry FROM Account ORDER BY Name ASC, Industry DESC
```

Using the preceding SOQL query, the accounts will first be sorted by Name in the ascending order and then by Industry in the descending order. The following screenshot shows the output of the SOQL execution: first, the records are arranged in the ascending order of the account's Name, and then they are sorted by Industry in the descending order.

Using the GROUP BY ROLLUP clause

The GROUP BY ROLLUP clause is used to add subtotals for aggregated data in query results. A query with a GROUP BY ROLLUP clause returns the same aggregated data as an equivalent query with a GROUP BY clause, but it also returns multiple levels of subtotal rows. You can include up to three fields in a comma-separated list in a GROUP BY ROLLUP clause.

Using the FOR REFERENCE clause

The FOR REFERENCE clause is used to find the date/time when a record was last referenced. The LastReferencedDate field is updated for any retrieved records. The FOR REFERENCE clause is used to track the date/time when a record was last referenced while executing a SOQL query.

Using the FOR VIEW clause

The FOR VIEW clause is used to find the date when a record was last viewed. The LastViewedDate field is updated for any retrieved records. The FOR VIEW clause is used to track the date when a record was last viewed while executing a SOQL query.

Using the GROUP BY CUBE clause

The GROUP BY CUBE clause is used to add subtotals for every possible combination of the grouped fields in the query results. The GROUP BY CUBE clause can be used with aggregate functions such as SUM() and COUNT(fieldName). A SOQL query with a GROUP BY CUBE clause retrieves the same aggregated records as an equivalent query with a GROUP BY clause, but it also retrieves additional subtotal rows for each combination of fields specified in the comma-separated grouping list, as well as the grand total.

Using the OFFSET clause

The OFFSET clause is used to specify the starting row number from which the records will be fetched. The OFFSET clause is very useful when we implement pagination in a Visualforce page. The OFFSET clause, along with LIMIT, is very useful in retrieving a subset of the records. OFFSET usage in SOQL has many limitations and restrictions.

Summary

In this article, we saw how to query the records from more than one object using relationship queries. The steps to get the relationship name among objects were also provided. Querying the records using both standard relationships and custom relationships was also discussed.

Resources for Article:

Further resources on this subject:
Learning to Fly with Force.com [Article]
Working with Home Page Components and Custom Links [Article]
Salesforce CRM Functions [Article]

A Quick Start Guide to Scratch 2.0

Packt
10 Apr 2014
6 min read
(For more resources related to this topic, see here.)

The anticipation of learning a new programming language can sometimes leave us frozen on the starting line, not knowing what to expect or where to start. Together, we'll take our first steps into programming with Scratch, and block by block, we'll create our first animation. Our work in this article will focus on getting comfortable with some fundamental concepts before we create projects in the rest of the book.

Joining the Scratch community

If you're planning to work with the online project editor on the Scratch website, I highly recommend you set up an account on scratch.mit.edu so that you can save your projects. If you're going to be working with the offline editor, then there is no need to create an account on the Scratch website to save your work; however, you will be required to create an account to share a project or participate in the community forums. Let's take a moment to set up an account and point out some features of the main account. That way, you can decide if creating an online account is right for you or your children at this time.

Time for action – creating an account on the Scratch website

Let's walk through the account creation process so we can see what information is generally required to create a Scratch account:

1. Open a web browser, go to http://scratch.mit.edu, and click on the link titled Join Scratch.
2. At the time of writing this book, you will be prompted to pick a username and a password, as shown in the following screenshot. Select a username and password. If the name is taken, you'll be prompted to enter a new username. Make sure you don't use your real name.
3. After you enter a username and password, click on Next. Then, you'll be prompted for some general demographic information, including the date of birth, gender, country, and e-mail address, as shown in the following screenshot. All fields need to be filled in.
4. After entering all the information, click on Next. The account is now created, and you receive a confirmation screen, as shown in the following screenshot.
5. Click on the OK Let's Go! button to log in to Scratch and go to your home page.

What just happened?

Creating an account on the Scratch website generally does not require a lot of detailed information. The Scratch team has made an effort to maximize privacy. They strongly discourage the use of real names in usernames, and for children, this is probably a wise decision. The birthday information is not publicized and is used as an account verification step while resetting passwords. The e-mail address is also not publicized and is used to reset passwords. The country and gender information is also not publicly displayed and is generally just used by Scratch to identify the users of Scratch. For more information on Scratch and privacy, visit http://scratch.mit.edu/help/faq/#privacy.

Time for action – understanding the key features of your account

When we log in to the Scratch website, we see our home page, as shown in the following screenshot:

All the projects we create online will be saved to My Stuff. You can go to this location by clicking on the folder icon with the S on it, next to the account avatar, at the top of the page. The following screenshot shows my projects:

Next to the My Stuff icon in the navigation pane is Messages, which is represented by a letter icon. This is where you'll find notifications of comments and activity on your shared projects. Clicking on this icon displays a list of messages. The next primary community feature available to subscribed users is the Discuss page. The Discuss page shows a list of forums and topics that can be viewed by anyone; however, an account is required to be able to post on the forums or topics.

What just happened?

A Scratch account provides users with four primary features when they view the website: saving projects, sharing projects, receiving notifications, and participating in community discussions. When we view our saved projects on the My Stuff page, as we can see in the previous screenshot, we have the ability to See inside the project to edit it, share it, or delete it.

Abiding by the terms of use

It's important that we take a few moments to read the terms of use policy so that we know what the community expects from us. Taken directly from Scratch's terms of use, the major points are:

- Be respectful
- Offer constructive comments
- Share and give credit
- Keep your personal information private
- Help keep the site friendly

Creating projects under Creative Commons licenses

Every work published on the Scratch website is shared under the Attribution-ShareAlike license. That doesn't mean you can surf the web and use copyrighted images in your work. Rather, the Creative Commons licensing ensures the collaboration objective of Scratch by making it easy for anyone to build upon what you do. When you look inside an existing project and begin to change it, the project keeps a remix tree, crediting the original sources of the work. A shout out to the original author in your projects would also be a nice way to give credit. For more information about the Creative Commons Attribution-ShareAlike license, visit http://creativecommons.org/licenses/by-sa/3.0/.

Closely related to the licensing of Scratch projects is the understanding that you as a web user cannot simply browse the web, find media files, incorporate them into your project, and then share the project for everyone. Respect the copyrights of other people. To this end, the Scratch team enforces the Digital Millennium Copyright Act (DMCA), which protects the intellectual rights and copyrights of others. More information on this is available at http://scratch.mit.edu/DMCA.

Finding free media online

As we'll see throughout the book, Scratch provides libraries of media, including sounds and images, that are freely available for use in our Scratch projects. However, we may find instances where we want to incorporate a broader range of media into our projects. A great search page for finding free media files is http://search.creativecommons.org.

Taking our first steps in Scratch

From this point forward, we're going to be project-editor agnostic, meaning you may choose to use the online project editor or the offline editor to work through the projects. When we encounter software that's unfamiliar to us, it's common to wonder, "Where do I begin?". The Scratch interface looks friendly enough, but the blank page can be a daunting thing to overcome. The rest of this article will be spent building some introductory projects to get us comfortable with the project editor. If you're not already on the Scratch site, go to http://scratch.mit.edu and let's get started.

Important Features of Gitolite

Packt
08 Apr 2014
6 min read
(For more resources related to this topic, see here.)

Access Control example with Gitolite

We will see how simple Access Control can be with Gitolite. First, here's an example where the junior developers (let's call them Alice and Bob here) should be prevented from rewinding or deleting any branches, while the senior developers (Carol and David) are allowed to do so. Gitolite uses a plain text file to specify the configuration, and these access rules are placed in that file:

```
repo foo
    RW    =  alice bob
    RW+   =  carol david
```

You probably guessed that the RW stands for read and write. The + in the second rule stands for force, just as it does in the push command, and allows you to rewind or delete a branch.

Now, suppose we want the junior developers to have some specific set of branches that they should be allowed to rewind or delete, a sort of "sandbox", if you will. The following rule will help you implement that:

```
    RW+  sandbox/  =  alice bob
```

Alice and Bob can now push, rewind, or delete any branches whose names start with sandbox/.

Access Control at the repository level is even easier, and you may even have guessed what that looks like:

```
repo foo
    RW+     =   alice
    R       =   bob

repo bar
    RW+     =   bob
    R       =   alice

repo baz
    RW+     =   carol
    R       =   alice bob
```

As you can see, you have three users with different access permissions for each of the three repositories. Doing this using the file system's permissions mechanisms or POSIX ACLs would be doable, but quite cumbersome to set up and to audit/review.

Sampling of Gitolite's power features

The access control examples show the most commonly used feature of Gitolite, the repository and branch level access control, but of course Gitolite has many more features. In this article, we will briefly look at a few of them.

Creating groups

Gitolite allows you to create groups of users or repositories for convenience. Think back to Alice and Bob, our junior developers. Let's say you had several rules that Alice and Bob needed to be mentioned in. Clearly, this is too cumbersome; every time a new developer joined the team, you'd have to change all the rules to add him or her. Gitolite lets you define a group like this:

```
@junior-devs    =  alice bob
```

Later, it lets you use the group in rules like this:

```
repo foo
    RW                =  @junior-devs
    RW+               =  carol david
    RW+  sandbox/     =  @junior-devs
```

This allows you to add a junior developer in just one place at the top of the configuration file instead of in potentially several places all over. More importantly, from the administrator's point of view, it serves as excellent documentation for the rules themselves; isn't it easier to reason about the rules when a descriptive group name is used rather than actual usernames?

Personal branches

Gitolite allows the administrator to give each developer a unique set of branches, called personal branches, that only he or she can create, push, or delete. This is a very convenient way to allow quick backups of work-in-progress branches, or to share code for preliminary review. We saw how the sandbox area was defined:

```
    RW+  sandbox/  =  alice bob
```

However, this does nothing to prevent one junior developer from accidentally wiping out another's branches. For example, Alice could delete a branch called sandbox/bob/work that Bob may have pushed. You can use the special word USER as a directory name to solve this problem:

```
    RW+  sandbox/USER/  =  alice bob
```

This works as if you had specified each user individually, like this:

```
    RW+  sandbox/alice/   =  alice
    RW+  sandbox/bob/     =  bob
```

Now, the set of branches that Alice is allowed to push is limited to those starting with sandbox/alice/, and she can no longer push or delete a branch called, say, sandbox/bob/work.

Personal repositories

With Gitolite, the administrator can choose to let users create their own repositories, in addition to the ones that the administrator creates. For this example, ignore the syntax and just focus on the functionality:

```
repo dev/CREATOR/[a-z].*
    C       =  @staff
    RW+     =  CREATOR
```

This allows members of the @staff group to create repositories whose names match the pattern supplied, which just means dev/<username>/<anything starting with a lowercase alphabetic character>. For example, a user called alice will be able to create repositories such as dev/alice/foo and dev/alice/bar.

Gitolite and the Git control flow

Conceptually, Gitolite is a very simple program. To see how it controls access to a Git repository, let us first look at how control flows from the client to the server in a normal Git operation (say git fetch) when using plain ssh:

When the user executes a git clone, fetch, or push, the Git client invokes ssh, passing it a command (either git-upload-pack or git-receive-pack, depending on whether the user is reading or writing). The local ssh client passes this to the server, and assuming authentication succeeds, that command gets executed on the server.

With Gitolite installed, the ssh daemon does not invoke git-upload-pack or git-receive-pack directly. Instead, it calls a program called gitolite-shell, which changes the control flow as follows:

First, notice that nothing changes on the Git client side in any way; the changes are only on the server side. In fact, unless an access violation happens and an error message needs to be sent to the user, the user may not even know that Gitolite is installed!

Second, notice the red link from Gitolite's shell program to the git-upload-pack program. This call does not happen if Gitolite determines that the user does not have the appropriate access to the repo concerned. This access check happens for both read (that is, git fetch and git clone commands) and write (git push) operations, although for writes, there are more checks that happen later.

Summary

In this article, we learned about access control with Gitolite. We also went through a sampling of Gitolite's power features and covered the Git control flow.

Resources for Article:

Further resources on this subject:
Parallel Dimensions – Branching with Git [Article]
Using Gerrit with GitHub [Article]
Issues and Wikis in GitLab [Article]

Installing Activiti

Packt
24 Mar 2014
4 min read
(For more resources related to this topic, see here.)

Getting started with Activiti BPM

Let's take a quick tour of the Activiti components so you can get an idea of what the core modules are in the Activiti BPM that make it a lightweight and solid framework. You can refer to the following figure for an overview of the Activiti modules:

In this figure, you can see that Activiti is divided into various modules. Activiti Modeler, Activiti Designer, and Activiti Kickstart are part of Modelling, and they are used to design your business process. Activiti Engine can be integrated with your application and is placed at the center as part of Runtime. To the right of Runtime, there are Activiti Explorer and Activiti REST, which are part of Management and are used in handling business processes. Let's look at each component briefly to get an idea about it.

Activiti Engine

The Activiti Engine is a framework that is responsible for deploying the process definitions, starting the business process instance, and executing the tasks. The following are the important features of the Activiti Engine:

- Performs the various tasks of a process engine
- Runs BPMN 2 standard processes
- Can be configured with JTA and Spring
- Easy to integrate with other technologies
- Rock-solid engine
- Execution is very fast
- Easy to query history information
- Provides support for asynchronous execution
- Can be built with the cloud for scalability
- Ability to test the process execution
- Provides support for event listeners, which can be used to add custom logic to the business process
- Can be configured using the Activiti Engine APIs or the REST API

Workflow execution using services

You can interact with Activiti using the various available services. With the help of the process engine services, you can interact with workflows using the available APIs. Objects of process engines and services are threadsafe, so you can place a reference to one of them to represent a whole server.

In the preceding figure, you can see that the Process Engine is at the central point and can be instantiated using ProcessEngineConfiguration. The Process Engine provides the following services:

- Repository Service: This service is responsible for storing and retrieving our business processes from the repository
- Runtime Service: Using this service, we can start our business process and fetch information about a process that is in execution
- Task Service: This service specifies the operations needed to manage human (standalone) tasks, such as the claiming, completing, and assigning of tasks
- Identity Service: This service is useful for managing users, groups, and the relationships between them
- Management Service: This service exposes engine, admin, and maintenance operations, which have no relation to the runtime execution of business processes
- History Service: This service provides services for getting information about ongoing and past process instances
- Form Service: This service provides access to form data and renders forms for starting new process instances and completing tasks

Activiti Modeler

The Activiti Modeler is an open source modeling tool provided by the KIS BPM process solution. Using the Activiti Modeler, you can manage your Activiti Server and the deployments of business processes. It's a web-based tool for managing your Activiti projects. It also provides a web form editor, which helps you to design forms, make changes, and design business processes easily.

Activiti Designer

The Activiti Designer is used to add technical details to an imported business process model or a process created using the Activiti Modeler, which is only used to design business process workflows. The Activiti Designer can be used to graphically model, test, and deploy BPMN 2.0 processes. It also provides a feature to design processes, just as the Activiti Modeler does. It is mainly used by developers to add technical detail to business processes. The Activiti Designer is an IDE tool that is available only as an Eclipse plugin.

Activiti Explorer

The Activiti Explorer is a web-based application that can be easily accessed by a non-technical person who can then run a business process. Apart from running business processes, it also provides an interface for process-instance management, task management, and user management, and it allows you to deploy business processes and generate reports based on historical data.

Activiti REST

Activiti REST provides a REST API to access the Activiti Engine. To access the Activiti REST API, we need to deploy activiti-rest.war to a servlet container, such as Apache Tomcat. You can configure Activiti in your own web application using the Activiti REST API. It uses the JSON format and is built upon Restlet. Activiti also provides a Java API; if you don't want to use the REST API, you can use the Java API instead.

Build a Chat Application using the Java API for WebSocket

Packt
24 Mar 2014
5 min read
Traditionally, web applications have been developed using the request/response model followed by the HTTP protocol. In this model, the request is always initiated by the client, and then the server returns a response to the client. There has never been any way for the server to send data to the client independently (without having to wait for a request from the browser) until now. The WebSocket protocol allows full-duplex, two-way communication between the client (browser) and the server.

Java EE 7 introduces the Java API for WebSocket, which allows us to develop WebSocket endpoints in Java. The Java API for WebSocket is a brand-new technology in the Java EE Standard. A socket is a two-way pipe that stays alive longer than a single request. Applied to an HTML5-compliant browser, this allows for continuous communication to or from a web server without the need to load a new page (similar to AJAX).

Developing a WebSocket Server Endpoint

A WebSocket server endpoint is a Java class deployed to the application server that handles WebSocket requests. There are two ways in which we can implement a WebSocket server endpoint via the Java API for WebSocket: either by developing an endpoint programmatically, in which case we need to extend the javax.websocket.Endpoint class, or by decorating Plain Old Java Objects (POJOs) with WebSocket-specific annotations. The two approaches are very similar; therefore, we will be discussing only the annotation approach in detail and briefly explaining the second approach, that is, developing WebSocket server endpoints programmatically, later in this section.

In this article, we will develop a simple web-based chat application, taking full advantage of the Java API for WebSocket.

Developing an annotated WebSocket server endpoint

The following Java class illustrates how to develop a WebSocket server endpoint by annotating a Java class:

```java
package net.ensode.glassfishbook.websocketchat.serverendpoint;

import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/websocketchat")
public class WebSocketChatEndpoint {

    private static final Logger LOG = Logger.getLogger(WebSocketChatEndpoint.class.getName());

    @OnOpen
    public void connectionOpened() {
        LOG.log(Level.INFO, "connection opened");
    }

    @OnMessage
    public synchronized void processMessage(Session session, String message) {
        LOG.log(Level.INFO, "received message: {0}", message);

        try {
            // Broadcast the received message to every open session (every connected chat client).
            for (Session sess : session.getOpenSessions()) {
                if (sess.isOpen()) {
                    sess.getBasicRemote().sendText(message);
                }
            }
        } catch (IOException ioe) {
            LOG.log(Level.SEVERE, ioe.getMessage());
        }
    }

    @OnClose
    public void connectionClosed() {
        LOG.log(Level.INFO, "connection closed");
    }
}
```

The class-level @ServerEndpoint annotation indicates that the class is a WebSocket server endpoint. The URI (Uniform Resource Identifier) of the server endpoint is the value specified within the parentheses following the annotation (which is "/websocketchat" in this example); WebSocket clients will use this URI to communicate with our endpoint.

The @OnOpen annotation is used to decorate a method that needs to be executed whenever a WebSocket connection is opened by any of the clients. In our example, we are simply sending some output to the server log, but of course, any valid server-side Java code can be placed here.

Any method annotated with the @OnMessage annotation will be invoked whenever our server endpoint receives a message from a client. Since we are developing a chat application, our code simply broadcasts the message it receives to all connected clients. In our example, the processMessage() method is annotated with @OnMessage, and it takes two parameters: an instance of a class implementing the javax.websocket.Session interface and a String parameter containing the message that was received.

The getOpenSessions() method of the Session interface returns a set of session objects representing all open sessions. We iterate through this set to broadcast the received message to all connected clients by invoking the getBasicRemote() method on each session instance and then invoking the sendText() method on the resulting RemoteEndpoint.Basic implementation returned by that call. The getOpenSessions() method returns all the sessions that were open at the time it was invoked; it is possible for one or more of the sessions to have closed after the method was invoked. Therefore, it is recommended to invoke the isOpen() method on a Session implementation before attempting to return data back to the client, as an exception may be thrown if we attempt to access a closed session.

Finally, we need to decorate a method with the @OnClose annotation in case we need to handle the event when a client disconnects from the server endpoint. In our example, we simply log a message into the server log. There is one additional annotation that we didn't use in our example—the @OnError annotation; it is used to decorate a method that needs to be invoked in case there's an error while sending or receiving data to or from the client.

As we can see, developing an annotated WebSocket server endpoint is straightforward. We simply need to add a few annotations, and the application server will invoke our annotated methods as necessary. If we wish to develop a WebSocket server endpoint programmatically, we need to write a Java class that extends javax.websocket.Endpoint. This class has the onOpen(), onClose(), and onError() methods that are called at the appropriate times during the endpoint's life cycle. There is no method equivalent to the @OnMessage annotation to handle incoming messages from clients; instead, the addMessageHandler() method needs to be invoked on the session, passing an instance of a class implementing the javax.websocket.MessageHandler interface (or one of its subinterfaces) as its sole parameter. In general, it is easier and more straightforward to develop annotated WebSocket endpoints compared to their programmatic counterparts. Therefore, we recommend that you use the annotated approach whenever possible.

Presenting Data Using ADF Faces

Packt
20 Mar 2014
7 min read
(For more resources related to this topic, see here.) In this article, you will learn how to present a single record, multiple records, and master-details records on your page using different components and methodologies. You will also learn how to enable the internationalizing and localizing processes in your application by using a resource bundle and the different options of bundle you can have. Starting from this article onward, we will not use the HR schema. We will rather use the FacerHR schema in the Git repository under the BookDatabaseSchema folder and read the README.txt file for information on how to create the database schema. This schema will be used for the whole book, so you need to do this only once. Make sure you validate your database connection information for your recipes to work without problem. Presenting single records on your page In this recipe, we will address the need for presenting a single record in a page, which is useful specifically when you want to focus on a specific record in the table of your database; for example, a user's profile can be represented by a single record in an employee's table. The application and its model have been created for you; you can see it by cloning the PresentingSingleRecord application from the Git repository. How to do it... In order to present a single record in pages, follow the ensuing steps: Open the PresentingSingleRecord application. Create a bounded task flow by right-clicking on ViewController and navigating to New | ADF Task Flow. Name the task flow single-employee-info and uncheck the Create with Page Fragments option. You can create a task flow with a page fragment, but you will need a page to host it at the end; alternatively, you can create a whole page if the task flow holds only one activity and is not reusable. However, in this case, I prefer to create a page-based task flow for fast deployment cycles and train you to always start from task flow. Add a View activity inside of the task flow and name it singleEmployee. Double-click on the newly created activity to create the page; this page will be based on the Oracle Three Column layout. Close the dialog by pressing the OK button. Navigate to Data Controls pane | HrAppModuleDataControl, drag-and-drop EmployeesView1 into the white area of the page template, and select ADF Form from the drop-down list that appears as you drop the view object. Check the Row Navigation option so that it has the first, previous, next, and last buttons for navigating through the task. Group attributes based on their category, so the Personal Information group should include the EmployeeId, FirstName, LastName, Email, and Phone Number attributes; the Job Information group should include HireDate, Job, Salary, and CommissionPct; and the last group will be Department Information that includes both ManagerId and DepartmentId attributes. Select multiple components by holding the Ctrl key and click on the Group button at the top-right corner, as shown in the following screenshot: Change the Display Label values of the three groups to eInfo, jInfo, and dInfo respectively. The Display Label option is a little misleading when it comes to groups in a form as groups don't have titles. Due to this, Display Label will be assigned to the Id attribute of the af:group component that will wrap the components, which can't have space and should be reasonably small; however, Input Text w/Label or Output Text w/Label will end up in the Label attribute in the panelLabelAndMessage component. 
Change the Component to Use option of all attributes from ADF Input Text w/Label to ADF Output Text w/Label. You might think that if you check the Read-Only Form option, it will have the same effect, but it won't. What will happen is that the readOnly attribute of the input text will change to true, which will make the input text non-updateable; however, it won't change the component type. Change the Display Label option for the attributes to have more human-readable labels to the end user; you should end up with the following screen: Finish by pressing the OK button. You can save yourself the trouble of editing the Display Label option every time you create a component that is based on a view object by changing the Label attribute in UI Hints from the entity object or view object. More information can be found in the documentation at http://docs.oracle.com/middleware/1212/adf/ADFFD/bcentities.htm#sm0140. Examine the page structure from the Structure pane in the bottom-left corner as shown in the following screenshot. A panel form layout can be found inside the center facet of the page template. This panel form layout represents an ADF form, and inside of it, there are three group components; each group has a panel label and message for each field of the view object. At the bottom of the panel form layout, you can locate a footer facet; expand it to see a panel group layout that has all the navigation buttons. The footer facet identifies the locations of the buttons, which will be at the bottom of this panel form layout even if some components appear inside the page markup after this facet. Examine the panel form layout properties by clicking on the Properties pane, which is usually located in the bottom-right corner. It allows you to change attributes such as Max Columns, Rows, Field Width, or Label Width. Change these attributes to change the form and to have more than one column. If you can't see the Structure or Properties pane, you can see them again by navigating to Window menu | Structure or Window menu | Properties. Save everything and run the page, placing it inside the adf-config task flow; to see this in action, refer to the following screenshot: How it works... The best component to represent a single record is a panel form layout, which presents the user with an organized form layout for different input/output components. If you examine the page source code, you can see an expression like #{bindings.FirstName.inputValue}, which is related to the FirstName binding inside the Bindings section of the page definition where it points to EmployeesView1Iterator. However, iterator means multiple records, then why FirstName is only presenting a single record? It's because the iterator is aware of the current row that represents the row in focus, and this row will always point to the first row of the view object's select statement when you render the page. By pressing different buttons on the form, the Current Row value changes and thus the point of focus changes to reflect a different row based on the button you pressed. When you are dealing with a single record, you can show it as the input text or any of the user input's components; alternatively, you can change it as the output text if you are just viewing it. In this recipe, you can see that the Group component is represented as a line in the user interface when you run the page. If you were to change the panel form layout's attributes, such as Max Columns or Rows, you would see a different view. 
Max Columns represents the maximum number of columns to show in a form, which defaults to 3 for desktops and 2 for PDAs; however, if this panel form layout is nested inside another panel form layout, the Max Columns value will always be 1. The Rows attribute represents the number of rows after which a new column starts; it has a default value of 2^31 - 1. You can learn more about each attribute by clicking on the gear icon that appears when you hover over an attribute and reading the information on the property's Help page. The benefit of a panel form layout is that all labels are aligned properly; it organizes everything for you, similar to the HTML table component.
See also
Check the following reference for more information about arranging content in forms: http://docs.oracle.com/middleware/1212/adf/ADFUI/af_orgpage.htm#CDEHDJEA

Components of PrimeFaces Extensions

Packt
20 Mar 2014
6 min read
(For more resources related to this topic, see here.)
The commonly used input components and their features
The PrimeFaces Extensions team created some basic form components that are frequently used in registration forms. These frequently used components are the InputNumber component, which formats input fields with numeric strings, and the KeyFilter component, which filters keyboard input, whereas select components such as TriStateCheckbox and TriStateManyCheckbox add a new state to the select Boolean checkbox and many checkbox components respectively.
Understanding the InputNumber component
The InputNumber component can be used to format input form fields with custom number strings. The main features of this component include support for currency symbols, min and max values, negative numbers, and many more rounding methods. The component development is based on the autoNumeric jQuery plugin. The InputNumber component features are basically categorized into two main sections:
- Common usage
- Validations, conversions, and rounding methods
Common usage
The InputNumber use case covers basic common operations such as appending currency symbols on either side of the number (that is, prefix and suffix notations), custom decimal and thousand separators, minimum and maximum values, and custom decimal places. The following XHTML code is used to create InputNumber with all possible custom options in the form of attributes:
<pe:inputNumber id="customInput" value="#{inputNumberController.value}"
    symbol=" $" symbolPosition="p" decimalSeparator=","
    thousandSeparator="." minValue="-99.99" maxValue="99.99"
    decimalPlaces="4" />
Validations, conversions, and rounding methods
The purpose of this use case is just like that of any other standard JSF and PrimeFaces input component: we can apply different types of converters and validators to the InputNumber component. Apart from these regular features, you can also control how an empty input is displayed with different options such as empty, sign, and zero values. The InputNumber component is specific to numeric types; rounding methods are a commonly used feature for InputNumber in web applications. You can use the roundMethod attribute of InputNumber; its default value is Round-Half-Up Symmetric.
Exploring the KeyFilter component to restrict input data
On a form-based screen, you need to restrict the input on specific input components based on the component's nature and functionality. Instead of resorting to plain JavaScript with regular expressions, the Extensions team provided the KeyFilter component to filter keyboard input. It is not a standalone component and always depends on an input component, referenced through the for attribute.
TriStateCheckbox and TriStateManyCheckbox
Both the TriStateCheckbox and TriStateManyCheckbox components provide a new state to the standard SelectBooleanCheckbox and SelectManyBooleanCheckbox components respectively. Each state is mapped to the 0, 1, and 2 string values. TriStateCheckbox can be customized using the title, custom icons for the three states, and item label features. Just as with any other standard input component, you can apply Ajax behavior listeners to this component as well.
Managing events using the TimeLine component
TimeLine is an interactive visualization chart for scheduling and manipulating events in a certain period of time. The time axis scale can be auto-adjusted and ranges from milliseconds to years.
The events can take place on a single date or a particular date range. The TimeLine component supports many features such as read-only events, editable events, grouping of events, a client-side and server-side API, and drag-and-drop.
Understanding the MasterDetail component and its various features
The MasterDetail component allows us to group data contents into multiple levels and save web page space for the remaining important areas of the application. The grouped data is maintained in a hierarchical manner and can be navigated through flexible built-in breadcrumbs or command components to go forward and backward in the web interface. Each level in the content flow is represented by a MasterDetailLevel component. This component holds the PrimeFaces/JSF data iteration or form components inside the grouping components. You can also switch between levels with the help of the SelectDetailLevel handler, which is based on Ajax, and dynamically load the levels through Ajax behavior. The SelectDetailLevel handler can be attached to ajaxified PrimeFaces components and standard JSF components. These components also support the header and footer facets.
Introducing the exporter component and its features
The PrimeFaces Core dataExporter component works very well on plain dataTable components and provides some custom features. The PrimeFaces Extensions exporter component, however, was introduced to work with all the major features of the dataTable component, provide full control over customization, and extend its features to dataList components. The exporter component is used to extract and report data in tabular form in different formats. This component is targeted to work with all major features of dataTable, subTable, and other data iteration components such as dataList as well. Currently, the supported file formats are PDF and Excel. Both headerText and footerText columns are supported along with the value holders.
CKEditor
The CKEditor is a WYSIWYG text editor for web pages that brings the desktop editing features of applications such as Microsoft Word and OpenOffice to web applications. The text being edited in this editor looks very similar to the result you see after the web page is published. The CKEditor component is available as a separate JAR file from the Extensions library; this library or dependency needs to be included on demand. The CKEditor component provides more custom controls than the PrimeFaces editor component, with a custom toolbar template and skinning of the user interface using the theme and interfaceColor properties. The editor component is, by default, displayed with all the controls to make the content customizable. You can supply a few more customizations through interfaceColor to change the interface dynamically and checkDirtyInterval for repeated interval checks after the content has been changed. Many Ajax events are supported by this component for making asynchronous calls to server-side code. The following XHTML code creates a CKEditor component with custom interface colors:
<pe:ckEditor id="editor" value="#{ckeditorController.content}"
    interfaceColor="#{ckeditorController.color}" checkDirtyInterval="0">
    <p:ajax event="save" listener="#{ckeditorController.saveListener}"
        update="growl"/>
</pe:ckEditor>
You can create any number of editor instances on the same page.
There are two ways in which we can customize the CKEditor toolbar component: Toolbar defined with default custom controls: You can customize the editor toolbar by declaring the control names in the form of a string name or an array of strings. CustomConfig JavaScript file for user-defined controls: You can also customize the toolbar by defining the custom config JavaScript file through the customConfig attribute and register the control configuration names on the toolbar attribute. Summary In this article, we discussed some commonly used input components and their features, and introduced how to manage events using the TimeLine component. Then we covered the MasterDetail component and introduced exporter component and its features. Resources for Article: Further resources on this subject: Getting Started with PrimeFaces [Article] JSF2 composite component with PrimeFaces [Article] JSF 2.0 Features: An Extension [Article]

Boost Your Search

Packt
19 Mar 2014
11 min read
(For more resources related to this topic, see here.)
The dismax query parser
Before we understand how to boost our search using the dismax query parser, we will learn what the dismax query parser is and the features that make it more appealing than the Lucene query parser. While using the Lucene query parser, a very vital problem was noticed: it requires the query to be well formed, with certain syntax rules such as balanced quotes and parentheses. The Lucene query parser is not sophisticated enough to understand that the end users might be laymen. Thus, these users might type anything for a query as they are unaware of such restrictions, and they are prone to end up with either an error or unexpected search results. To tackle such situations, the dismax query parser came into play. It has been named after Lucene's DisjunctionMaxQuery, which addresses the previously discussed issue along with incorporating a number of features that enhance search relevancy (that is, boosting or scoring). Now, let us do a comparative study of the features provided by the dismax query parser with those provided by the Lucene query parser. Here we go:
- Search is relevant to multiple fields that have different boost scores
- The query syntax is limited to the essentials
- Auto-boosting of phrases out of the search query
- Convenient query boosting parameters, usually used with the function queries
- You can specify a cut-off count of words to match the query
I believe you are aware of the q parameter, how the parser for user queries is set using the defType parameter, and the usage of the qf, mm, and q.alt parameters. If not, I recommend that you refer to the DisMax query parser documentation at https://cwiki.apache.org/confluence/display/solr/The+DisMax+Query+Parser.
Lucene DisjunctionMaxQuery
Lucene's DisjunctionMaxQuery provides the capability to search across multiple fields with different boosts. Let us consider the following example wherein the query string is mohan; we may configure dismax in such a way that it acts in a very similar way to DisjunctionMaxQuery. The equivalent Boolean query looks as follows:
fieldX:mohan^2.1 OR fieldY:mohan^1.4 OR fieldZ:mohan^0.3
Due to the difference in scoring, however, this Boolean query is not quite equivalent to what the dismax query actually does. In the case of a Boolean query like the preceding one, the final score is taken as the sum of the scores of each matching clause, whereas DisjunctionMaxQuery takes the highest-scoring clause as the final score. To understand this practically, let us calculate and compare the final scores for the two behaviors:
Fscore_booleanQuery = 2.1 + 1.4 + 0.3 = 3.8
Fscore_disjunctionMaxQuery = 2.1 (the highest of the three)
Based on the preceding calculation, the summed Boolean score is always greater than (or equal to) the DisjunctionMaxQuery score. The dismax query parser uses the DisjunctionMaxQuery behavior, which generally gives better search relevancy when we are searching for the same keyword in multiple fields, because a document is not rewarded merely for repeating the keyword across many fields. Now, we will look into another parameter, known as tie, that lets us tune this behavior further. The value of the tie parameter ranges from 0 to 1, 0 being the default value. Raising this value above 0 begins to favor documents that match the keyword in multiple fields over documents that matched only in a single, highly boosted field. With a tie value of 1, the score becomes very close to that of the Boolean query.
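To make the comparison concrete, the two combinations discussed above (the Boolean sum, and the DisjunctionMaxQuery maximum with a tie breaker) can be sketched in a few lines of Python; this is only an illustration, and the per-field scores are the hypothetical values from the preceding example:
# Hypothetical per-field scores for the term "mohan", taken from the example above
field_scores = {"fieldX": 2.1, "fieldY": 1.4, "fieldZ": 0.3}

def boolean_sum(scores):
    # Boolean OR query: every matching clause adds its score
    return sum(scores)

def disjunction_max(scores, tie=0.0):
    # DisjunctionMaxQuery: the best field wins; the others contribute tie * score
    best = max(scores)
    return best + tie * (sum(scores) - best)

scores = list(field_scores.values())
print(round(boolean_sum(scores), 2))           # 3.8
print(round(disjunction_max(scores), 2))       # 2.1 (tie defaults to 0)
print(round(disjunction_max(scores, 0.1), 2))  # 2.27
print(round(disjunction_max(scores, 1.0), 2))  # 3.8, the same as the Boolean sum
Running the sketch shows how the tie parameter slides the score between the pure maximum and the Boolean sum.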
Practically speaking, a smaller value such as 0.1 is the best as well as the most effective choice we have.
Autophrase boosting
Let us assume that a user searches for Surendra Mohan. Solr interprets this as two different search keywords, and depending on how the request handler has been configured, either both the terms or just one would need to be found in a document. There might be a case wherein, in one of the matching documents, Surendra is the name of an organization and it has an employee named Mohan. It is quite obvious that Solr will find this document, and it might probably be of interest to the user due to the fact that it contains both the terms the user typed. However, it is quite likely that a document field containing the exact phrase Surendra Mohan represents a closer match to the document the user is actually looking for. In such scenarios, it is quite difficult to predict the relative score, even though the results contain the relevant documents the user was looking for. To tackle such situations and improve scoring, you might be tempted to quote the user's query automatically; however, this would omit the documents that don't have the words adjacent to each other. Instead, dismax can add a phrased form of the user's query onto the entered query as an optional clause. The following query:
Surendra Mohan
is rewritten as follows:
+(Surendra Mohan) "Surendra Mohan"
The rewritten query shows that the entered query is mandatory (using +) and that we have added an optional phrase clause. So, a document that contains the phrase Surendra Mohan not only matches that clause in the rewritten query, but also matches each of the terms individually (that is, Surendra and Mohan). Thus, in total, we have three clauses that Solr can work with. Assume that there is another document in which this phrase doesn't match, but which has both terms individually scattered throughout it. In this case, only two of the clauses would match. As per Lucene's scoring algorithm, the coordination factor for the first document (which matched the complete phrase) would be higher, assuming that all the other factors remain the same.
Configuring autophrase boosting
Autophrase boosting is not enabled by default. In order to avail this feature, you have to use the pf (phrase fields) parameter, whose syntax is very much identical to that of the qf parameter. To play around with the pf value, it is recommended that you start with the same value as that of qf and then make the necessary adjustments. There are a few reasons why we should vary the pf value instead of reusing qf. They are as follows:
- The pf value lets us use different boost factors so that the impact caused by phrase boosting isn't overwhelming.
- To omit fields that are always single-termed, for example, identifiers, because in such cases there is no point in searching for phrases.
- To omit some of the fields that hold a large amount of text in order to retain the search performance to a major extent.
- To substitute a field with another that has the same data but is analyzed differently. You may use different text analysis techniques, for example, Shingle or Common-grams, to achieve this.
To learn more about text analysis techniques and their usage, I would recommend that you refer to http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters.
Configuring the phrase slop
Before we learn how to configure the phrase slop, let us understand what it actually is.
Slop stands for term proximity, and is primarily used to factor the distance between two or more terms into the relevancy calculation. As discussed earlier in this section, if the two terms Surendra and Mohan are adjacent to each other in a document, that document will have a better score for the search keyword Surendra Mohan compared to a document that contains the terms Surendra and Mohan spread individually throughout the document. On the other hand, when used in conjunction with the OR operator, the relevancy of the documents returned in the search results is likely to be improved. The following example shows the syntax of using slop, which is a phrase (in double quotes) followed by a tilde (~) and a number:
"Surendra Mohan"~1
Dismax allows two parameters to be added so that the slop can be set automatically: qs for any phrase queries entered explicitly by the user, and ps for phrase boosting. In case the slop is not specified, there is no slop and its value remains 0. The following is a sample configuration setting for slop:
<str name="qs">1</str>
<str name="ps">0</str>
Boosting a partial phrase
You might come across a situation where you need to boost your search for consecutive word pairs or even triples out of a phrase query. To tackle such a situation, you need to use edismax, which can be configured by setting pf2 and pf3 for word pairs and triples, respectively. The pf2 and pf3 parameters are defined in a manner identical to that of the pf parameter. For instance, consider the following query:
how who now cow
This query becomes:
+(how who now cow) "how who now cow" "how who" "who now" "now cow" "how who now" "who now cow"
This feature is unaffected by the ps parameter, because ps is only applicable to the entire phrase boost and has no impact on partial phrase boosting. Moreover, you may expect better relevancy for longer queries; however, the longer the query, the slower its execution. To handle this situation and make longer queries execute faster, you need to explore and use text analysis techniques such as Shingle or Common-grams.
Boost queries
Apart from the other boosting techniques we discussed earlier, boost queries are another technique that impacts the score of a document to a major extent. Implementing boost queries involves specifying one or more additional queries using the bq parameter (or a set of bq parameters) of the dismax query parser. Just like the autophrase boost, these queries get added to the user's query in a very similar fashion. Let us not forget that boosting only impacts the scores of documents that already matched the user's query in the q parameter. So, to achieve a higher score for a document, we need to make sure the document matches a bq query. To understand boost queries better and learn how to work with them, let us consider a realistic example of a music composition as a commerce product. We are primarily concerned with the music type and composer fields, named wm_type and wm_composer, respectively. The wm_type field holds values such as Orchestral, Chamber, and Vocal, and the wm_composer field holds values such as Mohan and Webber. We don't wish to sort the search results on these fields, because we still want the natural scoring algorithm to determine the relevancy of the user's query; we simply want the score to be influenced by these fields.
For instance, let us assume that the music type Chamber is the most relevant one, whereas Vocal is the least relevant. Moreover, we assume that the composer Mohan is more relevant than Webber or others. Now, let us see how we can express this using the following boost query:
<str name="bq">wm_type:Chamber^2 (*:* -wm_type:Vocal)^2 wm_composer:Mohan^2</str>
Based on the search results for any keyword entered by the user (for instance, Opera Simmy), we can infer that our boost query did its job successfully by breaking tie scores between documents that would otherwise score the same but differ in music type and composer. In practical scenarios, to achieve the desired relevancy boost, the boost on each of the three clauses can be tweaked by examining the debugQuery output closely. In the preceding boost query, you must have noticed (*:* -wm_type:Vocal)^2, which boosts all the documents except those of the Vocal music type. You might think of using wm_type:Vocal^0.5 instead, but remember that this would still add value to the score and hence wouldn't serve our purpose. We have used *:* to instruct the parser that we would like to match all the documents. In case you don't want any document to match (that is, to achieve zero results), simply use -*:* instead. Compared to function queries, boost queries are not as effective, primarily because edismax supports multiplicative boosts, which are more powerful than additive ones. You might come across a painful situation wherein you want an equivalent boost for both the Chamber wm_type and the Mohan wm_composer values. To tackle such situations, you need to execute the query with debugQuery enabled so as to analyze the scores of each of the terms (which are going to be different). Then, you need to use disproportionate boosts so that, when multiplied by their scores (the resultant scores from debugQuery), they end up with the same value.
Summary
This article briefly described scoring and function queries. It also gave an idea about Lucene's DisjunctionMaxQuery.
Resources for Article:
Further resources on this subject:
Getting Started with Apache Solr [Article]
Apache Solr: Analyzing your Text Data [Article]
Apache Solr: Spellchecker, Statistics, and Grouping Mechanism [Article]

Maximizing everyday debugging

Packt
14 Mar 2014
5 min read
(For more resources related to this topic, see here.)
Getting ready
For this article, you will just need a premium version of VS2013, or you may use VS Express for Windows Desktop. Be sure to run your choice on a machine using a 64-bit edition of Windows. Note that Edit and Continue previously existed for 32-bit code.
How to do it…
Both features are now supported by C#/VB, but we will be using C# for our examples. The features being demonstrated are compiler-based features, so feel free to use code from one of your own projects if you prefer. To see how Edit and Continue can benefit 64-bit development, perform the following steps:
1. Create a new C# Console Application using the default name.
2. To ensure the demonstration is running with 64-bit code, we need to change the default solution platform. Click on the drop-down arrow next to Any CPU and select Configuration Manager…:
3. When the Configuration Manager dialog opens, we can create a new Project Platform targeting 64-bit code. To do this, click on the drop-down menu for Platform and select <New...>:
4. When <New...> is selected, it will present the New Project Platform dialog box. Select x64 as the new platform type:
5. Once x64 has been selected, you will return to the Configuration Manager. Verify that x64 remains active under Platform and then click on Close to close this dialog. The main IDE window will now indicate that x64 is active:
6. Now, let's add some code to demonstrate the new behavior. Replace the existing code in your blank class file so that it looks like the following listing:
class Program
{
    static void Main(string[] args)
    {
        int w = 16;
        int h = 8;
        int area = calcArea(w, h);
        Console.WriteLine("Area: " + area);
    }

    private static int calcArea(int width, int height)
    {
        // Deliberate bug: this should be width * height; we will fix it while debugging
        return width / height;
    }
}
7. Let's set some breakpoints so that we are able to inspect during execution. First, add a breakpoint to the Main method's Console line. Add a second breakpoint to the calcArea method's return line. You can do this by either clicking on the left side of the editor window's border or by right-clicking on the line and selecting Breakpoint | Insert Breakpoint:
8. If you are not sure where to click, use the right-click method and then practice toggling the breakpoint by left-clicking on the breakpoint marker. Feel free to use any method that you find most convenient. Once the two breakpoints are added, Visual Studio will mark their location as shown in the following screenshot (the arrow indicates where you may click to toggle the breakpoint):
9. With the breakpoints now set, let's debug the program. Begin debugging by either pressing F5 or clicking on the Start button on the toolbar:
10. Once debugging starts, the program will quickly execute until stopped by the first breakpoint. Let's first take a look at Edit and Continue. Visual Studio will stop at the calcArea method's return line. Astute readers will notice an error (marked by 1 in the following screenshot) in the calculation, as the value returned should be width * height. Make the correction.
11. Before continuing, note the variables listed in the Autos window (marked by 2 in the following screenshot). If you don't see Autos, it can be made visible by pressing Ctrl + D, A or through Debug | Windows | Autos while debugging.
12. After correcting the area calculation, advance the debugging by pressing F10 twice (alternatively, select the menu item Debug | Step Over twice). Visual Studio will advance to the declaration for area.
Note that you were able to edit your code and continue debugging without restarting. The Autos window will update to display the function's return value, which is 128 (the value for area has not been assigned yet): There's more… Programmers who write C++ already have the ability to see the return values of functions; this just brings .NET developers into the fold. Your development experience won't have to suffer based on the languages chosen for your projects. The Edit and Continue functionality is also available for ASP.NET projects. New projects created in VS2013 will have Edit and Continue enabled by default. Existing projects imported to VS2013 will usually need this to be enabled if it hasn't been already. To do so, right-click on your ASP.NET project in Solution Explorer and select Properties (alternatively, it is also available via Project | <Project Name> Properties…). Navigate to the Web option and scroll to the bottom to check the Enable Edit and Continue checkbox. The following screenshot shows where this option is located on the properties page: Summary In this article, we learned how to use the Edit and Continue feature. Using this feature enables you to make changes to your project without having to immediately recompile your project. This simplifies debugging and enables a bit of exploration. You also saw how the Autos window can display the values of variables as you step through your program’s execution. Resources for Article: Further resources on this subject: Using the Data Pager Control in Visual Studio 2008 [article] Load Testing Using Visual Studio 2008: Part 1 [article] Creating a Simple Report with Visual Studio 2008 [article]

Understanding the Python regex engine

Packt
18 Feb 2014
8 min read
(For more resources related to this topic, see here.)
These are the most common characteristics of the algorithm:
- It supports "lazy quantifiers" such as *?, +?, and ??.
- It returns the first match it finds, even though there may be longer matches in the string:
>>> re.search("engineer|engineering", "engineering").group()
'engineer'
This also means that order is important.
- The algorithm tracks only one transition at each step, which means that the engine checks one character at a time.
- Backreferences and capturing parentheses are supported.
- Backtracking is the ability to remember the last successful position so that it can go back and retry if needed.
- In the worst case, complexity is exponential, O(C^n). We'll see this later in Backtracking.
Backtracking
Backtracking allows going back and repeating the different paths of the regular expression. It does so by remembering the last successful position; this applies to alternation and quantifiers. Let's see an example:
Backtracking
As we see in the image, the regex engine tries to match one character at a time until it fails, and then starts again with the next path it can retry. The regex used in the image is a perfect example of the importance of how the regex is built; in this case, the expression can be rebuilt as spa(in|niard), so that the regex engine doesn't have to go back to the start of the string in order to retry the second alternative.
This leads us to what is called catastrophic backtracking, a well-known problem with backtracking that can give you several problems, ranging from a slow regex to a crash with a stack overflow. In the previous example, you can see that the behavior grows not only with the input but also with the different paths in the regex; that's why the algorithm is exponential, O(C^n). With this in mind, it's easy to understand why we can end up with a stack overflow. The problem arises when the regex fails to match the string. Let's benchmark a regex with the technique we've seen previously, so we can understand the problem better. First, let's try a simple regex:
>>> def catastrophic(n):
        print "Testing with %d characters" %n
        pat = re.compile('(a+)+c')
        text = "%s" %('a' * n)
        pat.search(text)
As you can see, the text we're trying to match is always going to fail, as there is no c at the end. Let's test it with different inputs:
>>> for n in range(20, 30):
        test(catastrophic, n)
Testing with 20 characters
The function catastrophic lasted: 0.130457
Testing with 21 characters
The function catastrophic lasted: 0.245125
……
The function catastrophic lasted: 14.828221
Testing with 28 characters
The function catastrophic lasted: 29.830929
Testing with 29 characters
The function catastrophic lasted: 61.110949
The behavior of this regex looks like quadratic. But why? What's happening here? The problem is that (a+) starts greedy, so it tries to get as many a's as possible. After that, it fails to match the (a+)+, that is, it backtracks to the second a, and continues consuming a's until it fails to match c; it then retries (backtracks) the whole process starting with the second a.
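The test helper used in these benchmarks is defined earlier in the book and is not shown in this excerpt; a minimal sketch of what such a timing helper could look like (hypothetical, in the Python 2 style of the listings) is:
>>> import time
>>> def test(func, *args):
        # Time a single call and report it in the same format as the output above
        t0 = time.time()
        func(*args)
        print "The function %s lasted: %f" % (func.__name__, time.time() - t0)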
Let's see another example, in this case with exponential behavior:
>>> def catastrophic(n):
        print "Testing with %d characters" %n
        pat = re.compile('(x+)+(b+)+c')
        text = 'x' * n
        text += 'b' * n
        pat.search(text)
>>> for n in range(12, 18):
        test(catastrophic, n)
Testing with 12 characters
The function catastrophic lasted: 1.035162
Testing with 13 characters
The function catastrophic lasted: 4.084714
Testing with 14 characters
The function catastrophic lasted: 16.319145
Testing with 15 characters
The function catastrophic lasted: 65.855182
Testing with 16 characters
The function catastrophic lasted: 276.941307
As you can see, the behavior is exponential, which can lead to catastrophic scenarios. Finally, let's see what happens when the regex has a match:
>>> def non_catastrophic(n):
        print "Testing with %d characters" %n
        pat = re.compile('(x+)+(b+)+c')
        text = 'x' * n
        text += 'b' * n
        text += 'c'
        pat.search(text)
>>> for n in range(12, 18):
        test(non_catastrophic, n)
Testing with 10 characters
The function catastrophic lasted: 0.000029
……
Testing with 19 characters
The function catastrophic lasted: 0.000012
Optimization recommendations
In the following sections, we will find a number of recommendations that can be applied to improve regular expressions. The best tool will always be common sense, and common sense will be needed even while following these recommendations. It has to be understood when a recommendation is applicable and when it is not. For instance, the recommendation don't be greedy cannot be used in 100% of the cases.
Reuse compiled patterns
To use a regular expression, we have to convert it from its string representation to a compiled form, a RegexObject. This compilation takes some time. If, instead of using the compile function, we use the rest of the module-level functions to avoid creating the RegexObject, we should understand that the compilation is executed anyway, and a number of compiled RegexObjects are cached automatically. However, when we compile explicitly, that cache won't help us: every single compile execution will consume an amount of time that is perhaps negligible for a single execution, but definitely relevant if many executions are performed.
Extract common parts in alternation
Alternation is always a performance risk point in regular expressions. When using it in Python, and therefore in a sort of NFA implementation, we should extract any common part outside of the alternation. For instance, if we have /(Hello⇢World|Hello⇢Continent|Hello⇢Country)/, we could easily extract Hello⇢, giving us the following expression: /Hello⇢(World|Continent|Country)/. This makes our engine check Hello⇢ just once, instead of going back and rechecking it for each alternative.
Shortcut the alternation
Ordering in alternation is relevant; each of the different options present in the alternation will be checked one by one, from left to right. This can be used in favor of performance. If we place the more likely options at the beginning of the alternation, the alternation will be marked as matched sooner in more of the checks. For instance, we know that the more common colors of cars are white and black. If we are writing a regular expression accepting some colors, we should put white and black first, as those are the more likely to appear. This is: /(white|black|red|blue|green)/. For the rest of the elements, if they have the very same odds of appearing, it could be favorable to put the shortest ones before the longer ones.
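Neither the pattern names nor the timings below come from the book; this is just a small, self-contained sketch of the first two recommendations (precompiling once, and extracting the common part of an alternation) that you can adapt to your own patterns:
import re
import timeit

TEXT = "Hello World, Hello Continent, Hello Country. " * 1000

# Recommendation 1: compile once, outside any hot loop, and reuse the RegexObject
GREETING = re.compile(r"Hello (World|Continent|Country)")

def find_greetings(text, pattern=GREETING):
    return pattern.findall(text)

# Recommendation 2: the factored pattern checks "Hello " only once per attempt
unfactored = re.compile(r"Hello World|Hello Continent|Hello Country")
factored = re.compile(r"Hello (World|Continent|Country)")

print(timeit.timeit(lambda: unfactored.findall(TEXT), number=200))
print(timeit.timeit(lambda: factored.findall(TEXT), number=200))
Running both timings on your own data is the simplest way to confirm whether a rewrite is actually worth it for your patterns.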
Use non-capturing groups when appropriate
Capturing groups consume some time for each group defined in an expression. This time is not very important, but it is still relevant if we are executing a regular expression many times. Sometimes we use groups but are not interested in their result, for instance when using alternation. If that is the case, we can save some execution time for the engine by marking the group as non-capturing. This is: (?:person|company).
Be specific
When the patterns we define are very specific, the engine can help us by performing quick integrity checks before the actual pattern matching is executed. For instance, if we pass the expression /\w{15}/ to the engine to be matched against the text hello, the engine could decide to check whether the input string is actually at least 15 characters long instead of matching the expression.
Don't be greedy
We learnt the difference between greedy and reluctant quantifiers, and we found that quantifiers are greedy by default. What does this mean for performance? It means that the engine will always try to catch as many characters as possible and then reduce the scope step by step until the match is done. This could potentially make the regular expression slow if the match is typically short. Keep in mind, however, that this is only applicable if the match is usually short.
Summary
In this article, we saw how the engine works behind the scenes. We learned some theory of the engine design and how easy it is to fall into a common pitfall: catastrophic backtracking. Finally, we reviewed different general recommendations to improve the performance of our regular expressions.
Resources for Article:
Further resources on this subject:
Python LDAP Applications: Part 1 - Installing and Configuring the Python-LDAP Library and Binding to an LDAP Directory [Article]
Python Data Persistence using MySQL Part III: Building Python Data Structures Upon the Underlying Database Data [Article]
Python Testing: Installing the Robot Framework [Article]

Making Your Code Better

Packt
14 Feb 2014
8 min read
(For more resources related to this topic, see here.)
Code quality analysis
The fact that you can compile your code does not mean your code is good. It does not even mean it will work. There are many things that can easily break your code. A good example is an unhandled NullReferenceException: you will be able to compile your code and run your application, but there will be a problem. ReSharper v8 comes with more than 1400 code analysis rules and more than 700 quick fixes, which allow you to fix detected problems. What is really cool is that ReSharper provides you with code inspection rules for all supported languages. This means that ReSharper not only improves your C# or VB.NET code, but also HTML, JavaScript, CSS, XAML, XML, ASP.NET, ASP.NET MVC, and TypeScript. Apart from finding possible errors, code quality analysis rules can also improve the readability of your code. ReSharper can detect code that is unused and mark it as grayed out, prompt you that maybe you should use auto properties or object and collection initializers, or suggest the var keyword instead of an explicit type name. ReSharper provides you with five severity levels for rules and allows you to configure them according to your preference. Code inspection rules can be configured in ReSharper's Options window. A sample view of code inspection rules with the list of available severity levels is shown in the following screenshot:
Background analysis
One of the best features in terms of code quality in ReSharper is Background analysis. This means that all the rules are checked as you are writing your code. You do not need to compile your project to see the results of the analysis. ReSharper will display appropriate messages in real time.
Solution-wide inspections
By default, the described rules are checked locally, which means that they are checked within the current class. Because of this, ReSharper can mark some code as unused only if it is unused locally; for example, this can be an unused private method or some part of the code inside your method. These two cases are shown in the following screenshot:
In addition to local analysis, ReSharper can check some rules across your entire project. To do this, you need to enable Solution-wide inspections. The easiest way to enable Solution-wide inspections is to double-click the circle icon in the bottom-right corner of Visual Studio, as seen in the following screenshot:
With Solution-wide inspections enabled, ReSharper can mark public methods or return values that are unused. Please note that running Solution-wide inspections can hurt Visual Studio's performance in big projects. In such cases, it is better to disable this feature.
Disabling code inspections
With ReSharper v8, you can easily mark some part of your code as code that should not be checked by ReSharper. You can do this by adding the following comments:
// ReSharper disable all
// [your code]
// ReSharper restore all
All code between these two comments will be skipped by ReSharper's code inspections. Of course, instead of the word all, you can use the name of any ReSharper rule, such as UseObjectOrCollectionInitializer. You can also disable ReSharper analysis for a single line with the following comment:
// ReSharper disable once UseObjectOrCollectionInitializer
ReSharper can generate these comments for you.
If ReSharper highlights some issue, just press Alt + Enter and select Options for “YOUR_RULE“ inspection, as shown in the following screenshot:
Code Issues
You can also run an ad-hoc code analysis. An ad-hoc analysis can be run at the solution or project level. To run an ad-hoc analysis, just navigate to RESHARPER | Inspect | Code Issues in Solution or RESHARPER | Inspect | Code Issues in Current Project from the Visual Studio toolbar. This will display a dialog box that shows the progress of the analysis and will finally display the results in the Inspection Results window. You can filter and group the displayed issues as and when you need to. You can also quickly go to the place where an issue occurs just by double-clicking on it. A sample report is shown in the following screenshot:
Eliminating errors and code smells
We think you will agree that the code analysis provided by ReSharper is really cool and helps create better code. What is even cooler is that ReSharper provides you with features that can fix some issues automatically.
Quick fixes
Most errors and issues found by ReSharper can be fixed just by pressing Alt + Enter. This will display a list of the available solutions and let you select the best one for you.
Fix in scope
The quick fixes described above allow you to fix an issue in one particular place. However, sometimes there are issues that you would like to fix in every file in your project or solution. A great example is removing unused using statements or the this keyword. With ReSharper v8, you do not need to fix such issues manually. Instead, you can use a new feature called Fix in scope. You start as usual by pressing Alt + Enter, but instead of just selecting some solution, you can select more options by clicking the small arrow to the right of the available options. A sample usage of the Fix in scope feature is shown in the following screenshot:
This will allow you to fix the selected issue with just one click!
Structural Search and Replace
Even though ReSharper contains a lot of built-in analyses, it also allows you to create your own. You can create your own patterns that will be used to search for certain structures in your code. This feature is called Structural Search and Replace (SSR). To open the Search with Pattern window, navigate to RESHARPER | Find | Search with Pattern…. A sample window is shown in the following screenshot:
You can see two things here:
- On the left, there is a place to write your pattern
- On the right, there is a place to define placeholders
In the preceding example, we were looking for if statements that compare something with a false expression. You can now simply click on the Find button and ReSharper will display all the code that matches this pattern. Of course, you can also save your patterns. You can create new search patterns from the code editor. Just select some code, click the right mouse button, and select Find Similar Code…. This will automatically generate the pattern for this code, which you can easily adjust to your needs. SSR allows you not only to find code based on defined patterns, but also to replace it with different code. Click on the Replace button available at the top of the preceding screenshot. This will display a new section on the left called Replace pattern. There, you can write the code that will be placed instead of the code that matches the defined pattern. For the pattern shown, you can write the following code:
if (false == $value$)
{
    $statement$
}
This will simply change the order of the expressions inside the if statement.
The saved patterns can also be presented as Quick fixes. Simply navigate to RESHARPER | Options | Code Inspection | Custom Patterns and set proper severity for your pattern, as shown in the following screenshot: This will allow you to define patterns in the code editor, which is shown in the following screenshot: Code Cleanup ReSharper also allows you to fix more than one issue in one run. Navigate to RESHARPER | Tools | Cleanup Code… from the Visual Studio toolbar or just press Ctrl + E, Ctrl + C. This will display the Code Cleanup window, which is shown in the following screenshot: By clicking on the Run button, ReSharper will fix all issues configured in the selected profile. By default, there are two patterns: Full Cleanup Reformat Code You can add your own pattern by clicking on the Edit Profiles button. Summary Code quality analysis is a very powerful feature in ReSharper. As we have described in this article, ReSharper not only prompts you when something is wrong or can be written better, but also allows you to quickly fix these issues. If you do not agree with all rules provided by ReSharper, you can easily configure them to meet your needs. There are many rules that will open your eyes and show you that you can write better code. With ReSharper, writing better, cleaner code is as easy as just pressing Alt + Enter. Resources for Article: Further resources on this subject: Ensuring Quality for Unit Testing with Microsoft Visual Studio 2010 [Article] Getting Started with Code::Blocks [Article] Core .NET Recipes [Article]

Diagnostic leveraging of the Accelerated POC with the CRM Online service

Packt
24 Jan 2014
16 min read
(For more resources related to this topic, see here.) For customers considering Dynamics CRM solutions, online or on-premises, Sure Step introduced a service called Accelerated Proof of Concept with CRM Online. This service was designed to take advantage of free trial licenses that can be afforded to prospective customers, giving them the ability to "test drive" the CRM Online solution on their own before deciding to move forward with solution envisioning and solution acquisition. Microsoft Dynamics is one of the unique solution providers in the world to give customers the same foundational code base for both on-premises and online solutions. Thus, customers may validate the functionality on CRM Online and still choose to deploy the on-premises solution if they so desire. Hence this Sure Step service is titled Accelerated POC with CRM Online as opposed to for CRM Online solutions only. The activity flow of the Accelerated POC with CRM Online service is shown in the following diagram: The conceptual design behind this service was to provide some standard scenarios to customers. Customers would then have the ability to upload their own dataset for these scenarios, giving them the ability to quickly validate the corresponding CRM functionality and get a comfort feel for the Dynamics CRM product. To start with, the Microsoft Services team developed five out-of-box Sales Force Automation (SFA) scenarios that encompassed basic workflows for the following: Lead capture Lead allocation/routing Opportunity management Quote/contract development Contract conversion The SFA scenarios were made available to the Dynamics partner ecosystem via the CRM Marketplace in the form of a Quick Start package. Besides the solution setup for the scenarios, the package also included a delivery guide, demo scripts, and sample data that could be used as a building block for a customer Proof of Concept for the corresponding scenarios. Using the previously discussed materials, solution providers could execute high-level requirements and Fit Gap reviews in the first step of this service. If one or more of the scenarios fits with the customer needs, the solution provider can conduct a preliminary business value assessment to get an initial gauge of the solution benefits as well as an architecture assessment to determine how the customer's users would access the system. Following that, the provider's team would set up the system with customer data and turn it over to the customer's assigned users, who could use the remainder of the free trial licenses to test the system. The idea of giving the customer's users a setup with their own data was to make it that much more intuitive to them in testing and evaluating a new system. The Accelerated POC concept can be leveraged for scenarios besides the five noted if the solution providers develop other predefined scenarios that they commonly encounter in customer engagements and then follow the process previously noted. The Accelerated POC scenarios can also be leveraged as starting points for customer demonstrations of the CRM solution. It also bears mention that the Accelerated POC was designed to get a quick win for the sales team with a customer by affording them hands-on experience on a limited subset of the CRM functionality. 
When the customer feels comfortable with that aspect of the solution, the solution provider may avail of the other Decision Accelerator Offering services previously discussed, including Requirements and Process Review, Fit Gap and Solution Blueprint, Architecture Assessment, and Scoping Assessment, to determine the scope of the full solution required for the customer. The Diagnostic phase for a current Dynamics customer We covered the Sure Step Diagnostic phase guidance for a new or prospective Dynamics customer. The Diagnostic phase also supports the process for due diligence and solution selling process for an existing Dynamics customer, which is the topic of discussion in this article. The following diagram shows the flow of activities and services of the Decision Accelerator Offering for an existing customer. The flow is very similar to the one for a prospect, with the only difference being the Upgrade Assessment DA service replacing the Requirements and Process Review DA service. Much like the flow for a prospect, the flow for the existing customer begins with the Diagnostic preparation. In this case, however, the sales team uses the guidance to explain the capabilities and features of the new version of the corresponding Microsoft Dynamics solution. When the customer expresses interest in moving their existing solution to the current version of the solution, the next step is the Upgrade Assessment DA service. Assessing the upgrade requirements The services delivery team has two primary objectives when executing the Upgrade Assessment DA service. First, the delivery team assesses the current solution to determine the impact of the proposed upgrade. Second, they determine the optimal approach to upgrade the solution to the current version. The Upgrade Assessment DA service begins with the solution delivery team meeting with the customer to understand the requirements for the upgrade. The solution delivery team is usually comprised of solution and/or service sales executives as well as solution architects and senior application consultants to provide real-life perspectives to the customer. Sure Step provides product-specific questionnaires that can be leveraged for the Upgrade Assessment exercise, including upgrade questionnaires for Microsoft Dynamics AX, CRM, CRM Online, GP, NAV, and SL. In future releases, the Upgrade questionnaires for AX may be replaced by the Upgrade Analysis tool from the new Microsoft Dynamics R&D Lifecycle Services. In the next step, the solution architect and/or application consultants review the configurations, customizations, integrations, physical infrastructure, and system architecture of the customer's existing solution. The team then proceeds to highlight those requirements that can be met by the new feature enhancements and determine whether there are any customizations that may no longer be necessary in the new product version. The team also reviews the customizations that will need to be promoted to the upgraded solution and identifies any associated complexities and risks involved in upgrading the solution. Finally, the team will clearly delineate those requirements that are met by current functionality and those that require implementation of new functionality. For the new functionality, the delivery team can avail of the corresponding product questionnaires from the Requirements and Process Review DA service. The last step in the Upgrade Assessment DA service is to agree upon the delivery approach for the upgrade. 
If no new functionality is deemed necessary as part of the upgrade, the solution can use the Technical Upgrade project type guidance, workflow, and templates. On the other hand, if a new functionality is deemed necessary, it is recommended that you use a phased approach, in which the first release is a Technical Upgrade to bring the solution to the current product version, and then the ensuing release or releases implement the new functionality using the other Sure Step project types (Rapid, Standard, Enterprise, or Agile). Applying the other Decision Accelerator Offerings services to upgrade engagements If the upgrade is strictly to promote the solution to a current, supported release of the product, the solution delivery team can skip the Fit Gap and Solution Blueprint exercise and go to the Architecture Assessment DA service to determine the new hardware and infrastructure requirements and the Scoping Assessment DA service to estimate the effort for the upgrade. The team may also choose to combine all these services into a single offering and just use the templates and tools from the other offerings to provide the customer with a Statement of Work and Upgrade estimate. If the upgrade is going to introduce a new functionality, depending on the magnitude of the new requirements, the customer and sales teams may deem it necessary to execute or combine the Fit Gap and Solution Blueprint, Architecture Assessment, and Scoping Assessment DA services. This ensures that a proper blueprint, system architecture, and overall release approach is collectively discussed and agreed upon by both parties. In both cases, the Proof of Concept DA and Business Case DA services may not be necessary, although depending on the scope of the new functionality being introduced in the upgrade, the customer and sales teams may decide to use the Business Case tools to ensure that project justification is established. After the completion of the necessary DA services, the sales team can proceed to the Proposal Generation activity to establish the Project Charter and Project Plan. The next step is then to complete the sale in the Final Licensing and Services Agreement activity, including agreeing upon the new terms of the product licenses and the Statement of Work for the solution upgrade. Finally, the delivery team is mobilized in the Project Mobilization activity to ensure that the upgrade engagement is kicked off smoothly. Supporting the customer's buying cycle The Sure Step Diagnostic phase is designed to help the seller in the solution provider organizations and the buyer in the customer organizations. We also covered the applicability of the phase to the seller; in this article, we will talk about how the customer's due diligence efforts are enabled with a thorough process for selecting the right solution to meet their vision and requirements. We discussed the stages that correspond to the customer's buying cycle. 
The following diagram shows how the same Sure Step Diagnostic phase activities and Decision Accelerator Offerings that we applied to the solution selling process also align with the phases of the customer's buying cycle:

Phase I: Need Determination
- The Solution Overview activity
- The Requirements and Process Review Decision Accelerator service

Phase II: Alternatives Evaluation
- The Fit Gap and Solution Blueprint Decision Accelerator service
- The Architecture Assessment Decision Accelerator service
- The Scoping Assessment Decision Accelerator service

Phase III: Risk Evaluation
- The Proof of Concept Decision Accelerator service
- The Business Case Decision Accelerator service
- The Proposal Generation activity

Let's begin by addressing the application of the Decision Accelerator Offering and its services from the customer's perspective. From a seller's perspective, the term offering can be viewed as a sellable unit. The term Decision Accelerator, on the other hand, extends beyond the seller to the customer, as the intent of these units is to help them get expedient answers to their questions and move their decision-making process forward in a logical and structured manner. In that context, the term Decision Accelerator Offering is very much applicable to the customer as well.

The following sections discuss the alignment of the activities and Decision Accelerator Offering services to the customer's buying cycle. If the reader has not already done so, they are encouraged to review the previous sections to understand the constructs of the Decision Accelerator Offering services, as that material is not repeated in this article.

Defining organizational needs

The buyer starts the Need Determination phase by understanding the organizational pain points and gathering information on the solutions available in the marketplace. The guidance, links to additional websites, and other information sources in the Diagnostic preparation activity can help address this information gathering effort. If the customer's organization operates in an industry covered by Sure Step, it can also gain additional insight into how the solution relates to its specific needs from the industry sections.

While the guidance in the Diagnostic preparation activity provides the customer with external awareness of available solutions, the Sure Step Requirements and Process Review Decision Accelerator service facilitates the customer's understanding of their own internal needs. Using the role-tailored questionnaire templates in this offering, Subject Matter Experts (SMEs) from the customer team can work through "a-day-in-the-life-of" scenarios for each of the roles so that they can quantify the departmental and organizational needs from a user perspective, rather than from only a product perspective. Customers can also use the detailed process maps as a starting point, especially with the new BPM tool, to begin visualizing the organization's workflow with the new solution. Again, this helps the customer describe their needs from a user's perspective. Ultimately, the success or failure of a solution is determined by how applicable and pertinent it is to the user, so this point cannot be overemphasized.

The documentation of the requirements and to-be processes forms the basis for the future solution vision. Depending on how they are developed, the customer organization can leverage these documents to conduct a thorough evaluation of the solution alternatives and select the best solution to meet their needs.
Determining the right solution

After the needs are determined, the buyer begins evaluating the solution alternatives. This is where the Sure Step Fit Gap and Solution Blueprint, Architecture Assessment, and Scoping Assessment Decision Accelerator services can help the customer determine whether the Microsoft Dynamics solution is the right one for them.

As discussed in the earlier section, the Fit Gap and Solution Blueprint DA exercise begins by determining the Degree of Fit of the solution to each of the requirements (a minimal sketch of this calculation appears at the end of this section). Some customers may desire a higher Degree of Fit so as to minimize customizations, while others may be operating in a specialized environment that necessitates a fairly customized solution and, as such, may be comfortable with a lower Degree of Fit. In both scenarios, however, the customer will want to ensure that the total cost of ownership (TCO) of the solution is acceptable.

Following the determination of the solution fit, the customer SMEs will work with the solution provider to develop the blueprint for the future solution. The solution blueprint is typically presented to the customer's executive or business sponsor. As such, the document should be written in business language and should clearly explain how the business needs or pains will be met or resolved by the proposed solution.

Armed with the solution blueprint, the buyer then obtains other key information to evaluate whether the solution meets their cost criteria. The Architecture Assessment DA service will provide the customer with the proposed hardware and architecture, with which the customer's procurement department can determine the physical infrastructure costs. Should the customer have any concerns regarding the performance, scalability, or reliability of the solution, they can also request further technical validation from the service provider in the form of more detailed analysis in the corresponding areas. Finally, the Scoping Assessment DA service provides the customer business sponsor with the effort estimate and the associated costs for solution delivery. This exercise also gives the customer an understanding of the overall approach to delivering the solution, including timelines and projected resources, roles, and responsibilities.

Understanding and mitigating risks

In the last phase of their buying cycle, the customer will want assurances that the projected solution benefits far outweigh the associated risks. The Sure Step Proof of Concept Decision Accelerator service can help allay any specific concerns of the customer's SMEs or departmental leads around a certain area of the solution. The solution delivery team will set up, configure, and customize the solution, using customer data where possible, to show that the solution matches the customer's requirements. Any solution efforts executed in this offering are then carried over to the implementation and become the starting point for solution delivery.

The Sure Step Business Case Decision Accelerator service also helps the customer in this phase of their buying cycle, but more from the perspective of managing the executive and organizational buy-in for the solution. Using an independent, analyst-developed ROI tool, this service can help the customer team justify the acquisition of the solution to other key stakeholders in the organization, such as the CEO, CFO, or the board of directors.
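To make the Degree of Fit metric discussed earlier more concrete, the following minimal sketch shows one way such a figure could be computed. The requirement names, fit categories, and equal weighting are illustrative assumptions only; the actual Fit Gap worksheet used on an engagement may categorize and weight requirements differently.

```python
# A minimal, illustrative Degree of Fit calculation. The requirement names and
# fit categories below are hypothetical; Sure Step's own Fit Gap worksheet may
# categorize and weight requirements differently.
from collections import Counter

# Each requirement is classified by how the proposed solution meets it:
# "standard" = met out of the box, "configuration" = met with configuration,
# "customization" = met only through custom development (a gap).
requirements = {
    "Record inbound service calls": "standard",
    "Route cases by territory": "configuration",
    "Quarterly rebate calculation": "customization",
    "Email quote approvals": "standard",
    "Integrate with legacy billing system": "customization",
}

counts = Counter(requirements.values())
total = len(requirements)

# Treat standard and configuration-level fits as "fits"; customizations as gaps.
fits = counts["standard"] + counts["configuration"]
degree_of_fit = 100.0 * fits / total

print(f"Total requirements: {total}")
print(f"Fits (standard + configuration): {fits}")
print(f"Gaps (customization): {counts['customization']}")
print(f"Degree of Fit: {degree_of_fit:.0f}%")
```

A customer seeking to minimize customizations would look for a higher percentage here, whereas one operating in a specialized niche might accept a lower figure, provided the resulting TCO remains acceptable.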
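Similarly, the justification produced in the Business Case DA service can be thought of, at its simplest, as a return-on-investment and payback calculation. The sketch below is only a back-of-the-envelope illustration with made-up figures; it is not the analyst-developed ROI tool referenced above, which applies a far more detailed model.

```python
# A hypothetical, back-of-the-envelope ROI and payback calculation of the kind a
# business case might summarize. All figures are made up for illustration only.
initial_investment = 250_000   # licenses, infrastructure, and implementation services
annual_benefit = 180_000       # projected productivity gains and cost savings per year
annual_running_cost = 60_000   # maintenance, support, and subscription fees per year
horizon_years = 3              # evaluation period used by the business case

net_annual_benefit = annual_benefit - annual_running_cost
total_net_benefit = net_annual_benefit * horizon_years

# ROI over the horizon, and the time taken to recoup the initial investment.
roi = 100.0 * (total_net_benefit - initial_investment) / initial_investment
payback_years = initial_investment / net_annual_benefit

print(f"Net benefit per year: {net_annual_benefit:,}")
print(f"ROI over {horizon_years} years: {roi:.0f}%")
print(f"Payback period: {payback_years:.1f} years")
```

Even at this level of simplification, expressing the investment as an ROI percentage and a payback period gives the CFO or the board a familiar basis for weighing the solution against other initiatives.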
Such ROI-based justification can be a key step to counter organizational politics, and it can also be very important during the inevitable ebbs and flows of a solution delivery cycle.

Finally, the Sure Step Proposal Generation activity provides the customer sponsor and the customer project manager with the overall project charter and project plan, ensuring that they have clear documentation of what has been agreed upon between the buyer and the seller, to avoid any assumptions or misunderstandings down the line. The project charter will also identify the risks associated with solution delivery and should outline a mitigation strategy for each of them. The project charter developed by the solution provider may also note any dependencies or assumptions owned by the customer; the customer should ensure that they have the necessary resources in place so that these dependencies do not become impediments to the delivery team.

Approach to upgrading existing solutions

Similar to the evaluation process for a new solution, the Sure Step Diagnostic phase also supports the due diligence process for a current customer looking to upgrade their solution. The following diagram shows a flow very similar to that of the new solution evaluation, with the only difference being that the Upgrade Assessment DA service now replaces the Requirements and Process Review DA service:

As discussed in the earlier section, the Sure Step Upgrade Assessment Decision Accelerator service captures the business needs driving the customer to change or enhance their current solution and determines the best approach to upgrade to the latest version of the solution. If the current solution includes customizations that may no longer be necessary because of new features, the delivery team will identify them. The team will also evaluate the complexity of the overall upgrade as well as the release process for the upgrade.

The findings of the Upgrade Assessment DA exercise will also dictate the degree to which the customer should undertake the Fit Gap and Solution Blueprint, Architecture Assessment, Scoping Assessment, Proof of Concept, and Business Case Decision Accelerator services. Depending upon the magnitude of the new functionality desired, the customer sponsor and SMEs can decide to skip or combine the services as necessary. Regardless of how the DA Offering services are utilized, the project charter and project plan should be developed for the customer in the Proposal Generation activity.

Summary

This article discussed the Diagnostic phase activities and covered the Decision Accelerator Offerings in detail. It also described the ways in which the Diagnostic phase sets the stage for a quality implementation by outlining the risks involved. We also discussed the selection of the right approach for the deployment, as well as the parts that will be played by both the partner and the customer teams.

Resources for Article:

Further resources on this subject:
- An Overview of Microsoft Sure Step [Article]
- Foreword by Microsoft Dynamics Sure Step Practitioners [Article]
- Upgrading with Microsoft Sure Step [Article]