
How-To Tutorials - Web Development


Communicating with Server using Google Web Toolkit RPC

Packt
19 Jan 2011
5 min read
Google Web Toolkit 2 Application Development Cookbook — over 70 simple but incredibly effective practical recipes to develop web applications using GWT with JPA, MySQL, and iReport:

- Create impressive, complex browser-based web applications with GWT 2
- Learn the most effective ways to create reports with parameters, variables, and subreports using iReport
- Create Swing-like web-based GUIs using the Ext GWT class library
- Develop applications using browser quirks, JavaScript, and HTML scriptlets from scratch
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The Graphical User Interface (GUI) resides on the client side of the application. This article introduces the communication between the server and the client, where the client (GUI) sends a request to the server and the server responds accordingly.

In GWT, the interaction between the server and the client is made through the RPC mechanism. RPC stands for Remote Procedure Call. The concept is that there are some methods on the server side which are called by the client at a remote location. The client calls the methods by passing the necessary arguments, the server processes them, and then returns the result to the client. GWT RPC allows the server and the client to pass Java objects back and forth. RPC has the following steps:

- Defining the GWTService interface: Not all the methods of the server are called by the client. The methods which are called remotely by the client are defined in an interface called GWTService.
- Defining the GWTServiceAsync interface: Based on the GWTService interface, another interface is defined, which is an asynchronous version of the GWTService interface. By calling the asynchronous method, the caller (the client) is not blocked until the method completes the operation.
- Implementing the GWTService interface: A class is created where the abstract methods of the GWTService interface are overridden.
- Calling the methods: The client calls the remote method to get the server response.

Creating DTO classes

In this application, the server and the client will pass Java objects back and forth. For example, the BranchForm will request the server to persist a Branch object, where the Branch object is created and passed to the server by the client, and the server persists the object in the server database. In another example, the client will pass the Branch ID (as an int), the server will find the particular Branch information, and then send the Branch object to the client to be displayed in the branch form. So, both the server and the client need to send and receive Java objects.

We have already created the JPA entity classes and the JPA controller classes to manage the entities using the Entity Manager. But the JPA class objects are not transferable over the network using RPC; the JPA classes will be used only on the server side. For the client side (to send and receive objects), DTO classes are used. DTO stands for Data Transfer Object. A DTO is simply a transfer object which encapsulates the business data and transfers it across the network.

Getting ready

Create a package com.packtpub.client.dto, and create all the DTO classes in this package.

How to do it...

The steps required to complete the task are as follows:

1. Create a class BranchDTO that implements the Serializable interface:

```java
public class BranchDTO implements Serializable
```

2. Declare the attributes. You can copy the attribute declarations from the entity classes, but in this case, do not include the annotations:

```java
private Integer branchId;
private String name;
private String location;
```

3. Define the constructors, as shown in the following code:

```java
public BranchDTO(Integer branchId, String name, String location) {
    this.branchId = branchId;
    this.name = name;
    this.location = location;
}

public BranchDTO(Integer branchId, String name) {
    this.branchId = branchId;
    this.name = name;
}

public BranchDTO(Integer branchId) {
    this.branchId = branchId;
}

public BranchDTO() {
}
```

To generate the constructors automatically in NetBeans, right-click on the code, select Insert Code | Constructor, and then click on Generate after selecting the attribute(s).

4. Define the getters and setters:

```java
public Integer getBranchId() {
    return branchId;
}

public void setBranchId(Integer branchId) {
    this.branchId = branchId;
}

public String getLocation() {
    return location;
}

public void setLocation(String location) {
    this.location = location;
}

public String getName() {
    return name;
}

public void setName(String name) {
    this.name = name;
}
```

To generate the getters and setters automatically in NetBeans, right-click on the code, select Insert Code | Getter and Setter…, and then click on Generate after selecting the attribute(s).

Mapping entity classes and DTOs

In RPC, the client will send and receive DTOs, but the server needs pure JPA objects to be used by the Entity Manager. That's why we need to transform DTOs into JPA entity classes and vice versa. In this recipe, we will learn how to map the entity class and the DTO.

Getting ready

Create the entity and DTO classes.

How to do it...

Open the Branch entity class and define a constructor with a parameter of type BranchDTO. The constructor gets the properties from the DTO and sets them on its own properties:

```java
public Branch(BranchDTO branchDTO) {
    setBranchId(branchDTO.getBranchId());
    setName(branchDTO.getName());
    setLocation(branchDTO.getLocation());
}
```

This constructor will be used to create the Branch entity class object from the BranchDTO object. In the same way, the BranchDTO object is constructed from the entity class object, but in this case the constructor is not defined. Instead, it is done wherever it is required to construct the DTO from the entity class.

There's more...

Some third-party libraries are available for automatically mapping entity classes and DTOs, such as Dozer and Gilead. For details, you may visit http://dozer.sourceforge.net/ and http://noon.gilead.free.fr/gilead/.

Creating the GWT RPC Service

In this recipe, we are going to create the GWTService interface, which will contain an abstract method to add a Branch object to the database.

Getting ready

Create the Branch entity class and the DTO class.
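The excerpt stops before the recipe's own listing. As a rough sketch of what the two interfaces described in the RPC steps above typically look like in GWT — the interface names come from the article, while the addBranch signature and the servlet path are assumptions for illustration:

```java
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// Each interface goes in its own source file on the client package.
@RemoteServiceRelativePath("GWTService")       // assumed servlet mapping
public interface GWTService extends RemoteService {
    boolean addBranch(BranchDTO branchDTO);    // assumed method signature
}

// Asynchronous counterpart used by the client: GWT requires the same method
// name with an extra AsyncCallback parameter and a void return type.
public interface GWTServiceAsync {
    void addBranch(BranchDTO branchDTO, AsyncCallback<Boolean> callback);
}
```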


Working with Entities in Google Web Toolkit 2

Packt
19 Jan 2011
9 min read
Google Web Toolkit 2 Application Development Cookbook — over 70 simple but incredibly effective practical recipes to develop web applications using GWT with JPA, MySQL, and iReport:

- Create impressive, complex browser-based web applications with GWT 2
- Learn the most effective ways to create reports with parameters, variables, and subreports using iReport
- Create Swing-like web-based GUIs using the Ext GWT class library
- Develop applications using browser quirks, JavaScript, and HTML scriptlets from scratch
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Finding an entity

In this recipe, we are going to write the code to find an entity. From the client side, the ID of the entity will be passed to the server; the server will find the entity in the database using the JPA controller class, and then return the entity to the client in order to display it.

How to do it...

1. Declare the following method in the GWTService interface:

```java
public BranchDTO findBranch(int branchId);
```

2. Declare the asynchronous version of the above method in the GWTServiceAsync interface:

```java
public void findBranch(int branchId, AsyncCallback<BranchDTO> asyncCallback);
```

3. Implement this method in the GWTServiceImpl class:

```java
@Override
public BranchDTO findBranch(int branchId) {
    Branch branch = branchJpaController.findBranch(branchId);
    BranchDTO branchDTO = null;
    if (branch != null) {
        branchDTO = new BranchDTO();
        branchDTO.setBranchId(branch.getBranchId());
        branchDTO.setName(branch.getName());
        branchDTO.setLocation(branch.getLocation());
    }
    return branchDTO;
}
```

4. Create a callback instance on the client side (BranchForm in this case) to call this method, as shown in the following code:

```java
final AsyncCallback<BranchDTO> callbackFind = new AsyncCallback<BranchDTO>() {
    @Override
    public void onFailure(Throwable caught) {
        MessageBox messageBox = new MessageBox();
        messageBox.setMessage("An error occurred! Cannot complete the operation");
        messageBox.show();
        clear();
    }

    @Override
    public void onSuccess(BranchDTO result) {
        branchDTO = result;
        if (result != null) {
            branchIdField.setValue("" + branchDTO.getBranchId());
            nameField.setValue(branchDTO.getName());
            locationField.setValue(branchDTO.getLocation());
        } else {
            MessageBox messageBox = new MessageBox();
            messageBox.setMessage("No such Branch found");
            messageBox.show();
            clear();
        }
    }
};
```

5. Write the event-handling code for the find button as follows:

```java
findButton.addSelectionListener(new SelectionListener<ButtonEvent>() {
    @Override
    public void componentSelected(ButtonEvent ce) {
        MessageBox inputBox = MessageBox.prompt("Input", "Enter the Branch ID");
        inputBox.addCallback(new Listener<MessageBoxEvent>() {
            public void handleEvent(MessageBoxEvent be) {
                int branchId = Integer.parseInt(be.getValue());
                ((GWTServiceAsync) GWT.create(GWTService.class))
                        .findBranch(branchId, callbackFind);
            }
        });
    }
});
```

How it works...

Here, the steps for calling the RPC method are the same as those we followed for the add/save operation. The only difference is the type of the result we receive from the server. We have passed the int branch ID and have received the complete BranchDTO object, from which the values are shown in the branch form.

Updating an entity

In this recipe, we are going to write the code to update an entity. The client will transfer the DTO of the updated object, and the server will update the entity in the database using the JPA controller class.

How to do it...

1. Declare the following method in the GWTService interface:

```java
public boolean updateBranch(BranchDTO branchDTO);
```

2. Declare the asynchronous version of this method in the GWTServiceAsync interface:

```java
public void updateBranch(BranchDTO branchDTO, AsyncCallback<java.lang.Boolean> asyncCallback);
```

3. Implement the method in the GWTServiceImpl class:

```java
@Override
public boolean updateBranch(BranchDTO branchDTO) {
    boolean updated = false;
    try {
        branchJpaController.edit(new Branch(branchDTO));
        updated = true;
    } catch (IllegalOrphanException ex) {
        Logger.getLogger(GWTServiceImpl.class.getName()).log(Level.SEVERE, null, ex);
    } catch (NonexistentEntityException ex) {
        Logger.getLogger(GWTServiceImpl.class.getName()).log(Level.SEVERE, null, ex);
    } catch (Exception ex) {
        Logger.getLogger(GWTServiceImpl.class.getName()).log(Level.SEVERE, null, ex);
    }
    return updated;
}
```

4. Create a callback instance for this method on the client side (BranchForm in this case, if it is not created yet):

```java
final AsyncCallback<Boolean> callback = new AsyncCallback<Boolean>() {
    MessageBox messageBox = new MessageBox();

    @Override
    public void onFailure(Throwable caught) {
        messageBox.setMessage("An error occurred! Cannot complete the operation");
        messageBox.show();
    }

    @Override
    public void onSuccess(Boolean result) {
        if (result) {
            messageBox.setMessage("Operation completed successfully");
        } else {
            messageBox.setMessage("An error occurred! Cannot complete the operation");
        }
        messageBox.show();
    }
};
```

5. Write the event-handling code for the update button:

```java
updateButton.addSelectionListener(new SelectionListener<ButtonEvent>() {
    @Override
    public void componentSelected(ButtonEvent ce) {
        branchDTO.setName(nameField.getValue());
        branchDTO.setLocation(locationField.getValue());
        ((GWTServiceAsync) GWT.create(GWTService.class))
                .updateBranch(branchDTO, callback);
        clear();
    }
});
```

How it works...

This operation is also almost the same as the add operation shown previously. The difference here is the method of the controller class: the edit method of the controller class is used to update an entity.

Deleting an entity

In this recipe, we are going to write the code to delete an entity. The client will transfer the ID of the object, and the server will delete the entity from the database using the JPA controller class.

How to do it...

1. Declare the following method in the GWTService interface:

```java
public boolean deleteBranch(int branchId);
```

2. Declare the asynchronous version of this method in the GWTServiceAsync interface:

```java
public void deleteBranch(int branchId, AsyncCallback<java.lang.Boolean> asyncCallback);
```

3. Implement the method in the GWTServiceImpl class:

```java
@Override
public boolean deleteBranch(int branchId) {
    boolean deleted = false;
    try {
        branchJpaController.destroy(branchId);
        deleted = true;
    } catch (IllegalOrphanException ex) {
        Logger.getLogger(GWTServiceImpl.class.getName()).log(Level.SEVERE, null, ex);
    } catch (NonexistentEntityException ex) {
        Logger.getLogger(GWTServiceImpl.class.getName()).log(Level.SEVERE, null, ex);
    }
    return deleted;
}
```

4. Create a callback instance for this method on the client side (BranchForm in this case, if it is not created yet):

```java
final AsyncCallback<Boolean> callback = new AsyncCallback<Boolean>() {
    MessageBox messageBox = new MessageBox();

    @Override
    public void onFailure(Throwable caught) {
        messageBox.setMessage("An error occurred! Cannot complete the operation");
        messageBox.show();
    }

    @Override
    public void onSuccess(Boolean result) {
        if (result) {
            messageBox.setMessage("Operation completed successfully");
        } else {
            messageBox.setMessage("An error occurred! Cannot complete the operation");
        }
        messageBox.show();
    }
};
```

5. Write the event-handling code for the delete button:

```java
deleteButton.addSelectionListener(new SelectionListener<ButtonEvent>() {
    @Override
    public void componentSelected(ButtonEvent ce) {
        ((GWTServiceAsync) GWT.create(GWTService.class))
                .deleteBranch(branchDTO.getBranchId(), callback);
        clear();
    }
});
```

Managing a list for RPC

Sometimes, we need to transfer a list of objects as java.util.List (or a collection) back and forth between the server and the client. We already know from the preceding recipes that JPA entity class objects are not transferable directly using RPC. For the same reason, any list of JPA entity objects is not transferable directly either. To transfer a java.util.List using RPC, the list must contain objects of DTO classes only. In this recipe, we will see how we can manage a list for RPC.

In our scenario, we can consider two classes — Customer and Sales. The association between these two classes is that one customer makes zero or more sales, and one sale is made by one customer. Because of this association, the Customer class contains a list of sales, and the Sales class contains a single instance of the Customer class. For example, we want to transfer the full Customer object with the list of sales made by this customer. Let's see how we can make that possible.

How to do it...

1. Create DTO classes for Customer and Sales (CustomerDTO and SalesDTO, respectively). In the following table, the required changes in data types are shown for the entity and DTO class attributes. The list in the DTO class contains objects of the DTO class only; on the other hand, the list in the entity class contains objects of the entity class.

2. Define the following constructor in the Customer entity class:

```java
public Customer(CustomerDTO customerDTO) {
    setCustomerNo(customerDTO.getCustomerNo());
    setName(customerDTO.getName());
    setAddress(customerDTO.getAddress());
    setContactNo(customerDTO.getContactNo());
    List<SalesDTO> salesDTOList = customerDTO.getSalesList();
    salesList = new ArrayList<Sales>();
    for (int i = 0; i < salesDTOList.size(); i++) {
        SalesDTO salesDTO = salesDTOList.get(i);
        Sales sales = new Sales(salesDTO);
        salesList.add(sales);
    }
}
```

3. Define the following constructor in the Sales entity class:

```java
public Sales(SalesDTO salesDTO) {
    setSalesNo(salesDTO.getSalesNo());
    setSalesDate(salesDTO.getSalesDate());
    setCustomer(new Customer(salesDTO.getCustomer()));
    // there's more, but it is not relevant for this recipe
}
```

How it works...

Now, on the server side, the entity classes Customer and Sales will be used, and on the client side, CustomerDTO and SalesDTO will be used. Constructors with a DTO class type argument are defined for the mapping between the entity class and the DTO class. The addition here is the loop used for creating the list: from the CustomerDTO object, we get a list of SalesDTO. The loop gets one SalesDTO from the list, converts it to Sales, and adds it to the Sales list — that's all.

Authenticating a user through username and password

In this recipe, we are going to create the necessary methods to authenticate a user through a login process.

Getting ready

Create the DTO class for the entity class Users.

How to do it...

1. Declare the following method in the GWTService interface:

```java
public UsersDTO login(String username, String password);
```

2. Declare the following method in the GWTServiceAsync interface:

```java
public void login(String username, String password, AsyncCallback<UsersDTO> asyncCallback);
```

3. Implement the method in the GWTServiceImpl class:

```java
@Override
public UsersDTO login(String username, String password) {
    UsersDTO userDTO = null;
    UsersJpaController usersJpaController = new UsersJpaController();
    Users user = (Users) usersJpaController.findUsers(username);
    if (user != null) {
        if (user.getPassword().equals(password)) {
            userDTO = new UsersDTO();
            userDTO.setUserName(user.getUserName());
            userDTO.setPassword(user.getPassword());
            EmployeeDTO employeeDTO =
                new EmployeeDTO(user.getEmployee().getEmployeeId());
            employeeDTO.setName(user.getEmployee().getName());
            userDTO.setEmployeeDTO(employeeDTO);
        }
    }
    return userDTO;
}
```

How it works...

A username and password are passed to the method. An object of the UsersJpaController class is created to find the Users object based on the given username. If the find method returns null, it means that no such user exists. Otherwise, the password of the Users object is compared with the given password. If both passwords match, a UsersDTO object is constructed and returned.

The client will call this method during the login process. If the client gets null, it should handle it accordingly, as the username/password is not correct. If it is not null, the user is authenticated.

Summary

In this article we saw how we can manage entities in GWT RPC. Specifically, we covered the following:

- Finding an entity
- Updating an entity
- Deleting an entity
- Managing a list for RPC
- Authenticating a user through username and password

Further resources on this subject:

- Google Web Toolkit 2: Creating Page Layout [Article]
- Communicating with Server using Google Web Toolkit RPC [Article]
- Password Strength Checker in Google Web Toolkit and AJAX [Article]
- Google Web Toolkit GWT Java AJAX Programming [Book]
- Google Web Toolkit 2 Application Development Cookbook [Book]
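The type-mapping table mentioned in step 1 of the "Managing a list for RPC" recipe above did not survive extraction. As a hedged illustration of the shape it describes — the DTO holds a list of DTOs where the entity holds a list of entities — a CustomerDTO might look like the sketch below. The field names mirror the setters the Customer(CustomerDTO) constructor above calls; everything else is an assumption, not code from the book.

```java
import java.io.Serializable;
import java.util.List;

// Sketch only: mirrors the Customer entity, but every association points at a
// DTO type (SalesDTO) so the whole object graph stays serializable for GWT RPC.
public class CustomerDTO implements Serializable {
    private Integer customerNo;
    private String name;
    private String address;
    private String contactNo;
    private List<SalesDTO> salesList;   // the entity side holds List<Sales> instead

    public Integer getCustomerNo() { return customerNo; }
    public void setCustomerNo(Integer customerNo) { this.customerNo = customerNo; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }
    public String getContactNo() { return contactNo; }
    public void setContactNo(String contactNo) { this.contactNo = contactNo; }
    public List<SalesDTO> getSalesList() { return salesList; }
    public void setSalesList(List<SalesDTO> salesList) { this.salesList = salesList; }
}
```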


Tinkering Around in Django JavaScript Integration

Packt
18 Jan 2011
9 min read
Minor tweaks and bugfixes

Good tinkering can be a process that begins with tweaks and bugfixes, and snowballs from there. Let's begin with some of the smaller tweaks and bugfixes before tinkering further.

Setting a default name of "(Insert name here)"

Most of the fields on an Entity default to blank, which is in general appropriate. However, this means that there is a zero-width link for any search result which has not had a name set. If a user fills out the Entity's name before navigating away from that page, everything is fine, but it is a very suspicious assumption that all users will magically use our software in whatever fashion would be most convenient for our implementation. So, instead, we set a default name of "(Insert name here)" in the definition of an Entity, in models.py:

```python
name = models.TextField(blank = True, default = u'(Insert name here)')
```

Eliminating Borg behavior

One variant on the classic Singleton pattern in Gang of Four is the Borg pattern, where arbitrarily many instances of a Borg class may exist, but they share the same dictionary, so that if you set an attribute on one of them, you set the attribute on all of them. At present we have a bug, which is that our views pull all available instances. We need to specify something different. We update the end of ajax_profile(), including a slot for time zones to be used later in this article, to:

```python
return render_to_response(u'profile_internal.html',
    {
    u'entities': directory.models.Entity.objects.filter(
        is_invisible = False).order_by(u'name'),
    u'entity': entity,
    u'first_stati': directory.models.Status.objects.filter(
        entity = id).order_by(
        u'-datetime')[:directory.settings.INITIAL_STATI],
    u'gps': gps,
    u'gps_url': gps_url,
    u'id': int(id),
    u'emails': directory.models.Email.objects.filter(
        entity = entity, is_invisible = False),
    u'phones': directory.models.Phone.objects.filter(
        entity = entity, is_invisible = False),
    u'second_stati': directory.models.Status.objects.filter(
        entity = id).order_by(
        u'-datetime')[directory.settings.INITIAL_STATI:],
    u'tags': directory.models.Tag.objects.filter(entity = entity,
        is_invisible = False).order_by(u'text'),
    u'time_zones': directory.models.TIME_ZONE_CHOICES,
    u'urls': directory.models.URL.objects.filter(entity = entity,
        is_invisible = False),
    })
```

Likewise, we update homepage():

```python
profile = template.render(Context(
    {
    u'entities': directory.models.Entity.objects.filter(
        is_invisible = False),
    u'entity': entity,
    u'first_stati': directory.models.Status.objects.filter(
        entity = id).order_by(
        u'-datetime')[:directory.settings.INITIAL_STATI],
    u'gps': gps,
    u'gps_url': gps_url,
    u'id': int(id),
    u'emails': directory.models.Email.objects.filter(
        entity = entity, is_invisible = False),
    u'phones': directory.models.Phone.objects.filter(
        entity = entity, is_invisible = False),
    u'query': urllib.quote(query),
    u'second_stati': directory.models.Status.objects.filter(
        entity = id).order_by(
        u'-datetime')[directory.settings.INITIAL_STATI:],
    u'time_zones': directory.models.TIME_ZONE_CHOICES,
    u'tags': directory.models.Tag.objects.filter(
        entity = entity, is_invisible = False).order_by(u'text'),
    u'urls': directory.models.URL.objects.filter(
        entity = entity, is_invisible = False),
    }))
```

Confusing jQuery's load() with html()

If we had failed to load a profile in the main search.html template, we had a call to load(""). What we needed was:

```javascript
else {
    $("#profile").html("");
}
```

$("#profile").load("") loads a copy of the current page into the div named profile. We can improve on this slightly, to "blank" contents that include the default header:

```javascript
else {
    $("#profile").html("<h2>People, etc.</h2>");
}
```

Preventing display of deleted instances

In our system, enabling undo means that there can be instances (Entities, Emails, URLs, and so on) which have been deleted but are still available for undo. We have implemented deletion by setting an is_invisible flag to True, and we also need to check before displaying, to avoid puzzling behavior like a user deleting an Entity, being told Your change has been saved, and then seeing the Entity's profile displayed exactly as before. We accomplish this by specifying, for a QuerySet, .filter(is_invisible = False) where we might earlier have specified .all(), or by adding is_invisible = False to the conditions of a pre-existing filter; for instance:

```python
def ajax_download_model(request, model):
    if directory.settings.SHOULD_DOWNLOAD_DIRECTORY:
        json_serializer = serializers.get_serializer(u'json')()
        response = HttpResponse(mimetype = u'application/json')
        if model == u'Entity':
            json_serializer.serialize(getattr(directory.models,
                model).objects.filter(
                is_invisible = False).order_by(u'name'),
                ensure_ascii = False, stream = response)
        else:
            json_serializer.serialize(getattr(directory.models,
                model).objects.filter(is_invisible = False),
                ensure_ascii = False, stream = response)
        return response
    else:
        return HttpResponse(u'This feature has been turned off.')
```

In the main view for the profile, we add a check at the beginning so that a (basically) blank result page is shown:

```python
def ajax_profile(request, id):
    entity = directory.models.Entity.objects.filter(id = int(id))[0]
    if entity.is_invisible:
        return HttpResponse(u'<h2>People, etc.</h2>')
```

One nicety we provide is usually loading a profile on mouseover for its area of the search result page. This means that users can more quickly and easily scan through drilldown pages in search of the right match; however, there is a performance gotcha in simply specifying an onmouseover handler. If you specify an onmouseover for a containing div, you may get a separate event call every time the user hovers over an element contained in the div, easily getting 3+ calls if a user moves the mouse over to the link. That could be annoying to people on a VPN connection if it means that they are getting the network hits for numerous needless profile loads. To cut back on this, we define an initially null variable for the last profile moused over:

```javascript
PHOTO_DIRECTORY.last_mouseover_profile = null;
```

Then we call the following function in the containing div element's onmouseover:

```javascript
PHOTO_DIRECTORY.mouseover_profile = function(profile) {
    if (profile != PHOTO_DIRECTORY.last_mouseover_profile) {
        PHOTO_DIRECTORY.load_profile(profile);
        PHOTO_DIRECTORY.last_mouseover_profile = profile;
        PHOTO_DIRECTORY.register_editables();
    }
}
```

The relevant code from search_internal.html is as follows:

```html
<div class="search_result"
    onmouseover="PHOTO_DIRECTORY.mouseover_profile({{ result.id }});"
    onclick="PHOTO_DIRECTORY.click_profile({{ result.id }});">
```

We usually, but not always, enable this mouseover functionality; not always, because it works out to annoying behavior if a person is trying to edit, does a drag select, mouses over the profile area, and reloads a fresh, non-edited profile. Here we edit the Jeditable plugin's source code and add a few lines; we also perform a second check for whether the user is logged in, and offer a login form if not:

```javascript
/* if element is empty add something clickable (if requested) */
if (!$.trim($(this).html())) {
    $(this).html(settings.placeholder);
}
$(this).bind(settings.event, function(e) {
    $("div").removeAttr("onmouseover");
    if (!PHOTO_DIRECTORY.check_login()) {
        PHOTO_DIRECTORY.offer_login();
    }
    /* abort if disabled for this element */
    if (true === $(this).data('disabled.editable')) {
        return;
    }
```

For Jeditable-enabled elements, we can override the placeholder for an empty element at method call, but the default placeholder is cleared when editing begins; overridden placeholders aren't. We override the placeholder with something that gives us a little more control and styling freedom:

```javascript
// publicly accessible defaults
$.fn.editable.defaults = {
    name       : 'value',
    id         : 'id',
    type       : 'text',
    width      : 'auto',
    height     : 'auto',
    event      : 'click.editable',
    onblur     : 'cancel',
    loadtype   : 'GET',
    loadtext   : 'Loading...',
    placeholder: '<span class="placeholder"> Click to add.</span>',
    loaddata   : {},
    submitdata : {},
    ajaxoptions: {}
};
```

All of this is added to the file jquery.jeditable.js.

We now have, as well as an @ajax_login_required decorator, an @ajax_permission_required decorator. We test for this variable in the default postprocessor specified in $.ajaxSetup() for the complete handler. Because Jeditable will place the returned data inline, we also refresh the profile. This occurs after the code to check for an undoable edit and offer an undo option to the user:

```javascript
complete: function(XMLHttpRequest, textStatus) {
    var data = XMLHttpRequest.responseText;
    var regular_expression = new RegExp("<!-" + "-# (\\d+) #-" + "->");
    if (data.match(regular_expression)) {
        var match = regular_expression.exec(data);
        PHOTO_DIRECTORY.undo_notification(
            "Your changes have been saved. " +
            "<a href='JavaScript:PHOTO_DIRECTORY.undo(" +
            match[1] + ")'>Undo</a>");
    } else if (data == '{"not_permitted": true}' ||
            data == "{'not_permitted': true}") {
        PHOTO_DIRECTORY.send_notification(
            "We are sorry, but we cannot allow you " +
            "to do that.");
        PHOTO_DIRECTORY.reload_profile();
    }
},
```

Note that we have tried to produce the least painful clear message we can: we avoid both saying "You shouldn't be doing that," and a terse, "bad movie computer"-style message of "Access denied" or "Permission denied." We also removed from that method the code to call offer_login() if a call came back not authenticated. This looked good on paper, but our code was making Ajax calls soon enough that the user would get an immediate, unprovoked, modal login dialog on loading the page.

Adding a favicon.ico

In terms of minor tweaks, a visually distinct favicon.ico (http://softpedia.com/ is one of many free sources of favicon.ico files, or the favicon generator at http://tools.dynamicdrive.com/favicon/ can take an image like your company logo as the basis for an icon) helps your tabs look different at a glance from other tabs. Save a good, simple favicon in static/favicon.ico. The icon may not show up immediately when you refresh, but a good favicon makes it slightly easier for visitors to manage your pages among the others that they have to deal with. It shows up in the address bar, bookmarks, and possibly other places.

This brings us to the end of the minor tweaks; let us look at two slightly larger additions to the directory.


Facebook: Accessing Graph API

Packt
18 Jan 2011
8 min read
Facebook Graph API Development with Flash — build social Flash applications fully integrated with the Facebook Graph API:

- Build your own interactive applications and games that integrate with Facebook
- Add social features to your AS3 projects without having to build a new social network from scratch
- Learn how to retrieve information from Facebook's database
- A hands-on guide with step-by-step instructions and clear explanations that encourage experimentation and play

Accessing the Graph API through a Browser

We'll dive right in by taking a look at how the Graph API represents the information from a public Page. When I talk about a Page with a capital P, I don't just mean any web page within the Facebook site; I'm referring to a specific type of page, also known as a public profile. Every Facebook user has their own personal profile; you can see yours by logging in to Facebook and clicking on the "Profile" link in the navigation bar at the top of the site. Public profiles look similar, but are designed to be used by businesses, bands, products, organizations, and public figures, as a way of having a presence on Facebook. This means that many people have both a personal profile and a public profile.

For example, Mark Zuckerberg, the CEO of Facebook, has a personal profile at http://www.facebook.com/zuck and a public profile (a Page) at http://www.facebook.com/markzuckerberg. This way, he can use his personal profile to keep in touch with his friends and family, while using his public profile to connect with his fans and supporters.

There is a second type of Page: a Community Page. Again, these look very similar to personal profiles; the difference is that they are based on topics, experiences, and causes, rather than entities. Also, they automatically retrieve information about the topic from Wikipedia, where relevant, and contain a live feed of wall posts talking about the topic. All this can feel a little confusing – don't worry about it! Once you start using it, it all makes sense.

Time for action – loading a Page

Browse to http://www.facebook.com/PacktPub to load Packt Publishing's Facebook Page. You'll see a list of recent wall posts, an Info tab, some photo albums (mostly containing book covers), a profile picture, and a list of fans and links. That's how website users view the information. How will our code "see" it?

Take a look at how the Graph API represents Packt Publishing's Page by pointing your web browser at https://graph.facebook.com/PacktPub. This is called a Graph URL – note that it's the same URL as the Page itself, but with a secure https connection, and using the graph subdomain, rather than www. What you'll see is as follows:

```json
{
  "id": "204603129458",
  "name": "Packt Publishing",
  "picture": "http://profile.ak.fbcdn.net/hprofile-ak-snc4/hs302.ash1/23274_204603129458_7460_s.jpg",
  "link": "http://www.facebook.com/PacktPub",
  "category": "Products_other",
  "username": "PacktPub",
  "company_overview": "Packt is a modern, IT focused book publisher, specializing in producing cutting-edge books for communities of developers, administrators, and newbies alike.\n\nPackt published its first book, Mastering phpMyAdmin for MySQL Management in April 2004.",
  "fan_count": 412
}
```

What just happened?

You just fetched the Graph API's representation of the Packt Publishing Page in your browser. The Graph API is designed to be easy to pick up – practically self-documenting – and you can see that it's a success in that respect. It's pretty clear that the previous data is a list of fields and their values. The one field that's perhaps not clear is id; this number is what Facebook uses internally to refer to the Page. This means Pages can have two IDs: the numeric one assigned automatically by Facebook, and an alphanumeric one chosen by the Page's owner. The two IDs are equivalent: if you browse to https://graph.facebook.com/204603129458, you'll see exactly the same data as if you browse to https://graph.facebook.com/PacktPub.

Have a go hero – exploring other objects

Of course, the Packt Publishing Page is not the only Page you can explore with the Graph API in your browser. Find some other Pages through the Facebook website in your browser, then, using the https://graph.facebook.com/id format, take a look at their Graph API representations. Do they have more information, or less?

Next, move on to other types of Facebook objects: personal profiles, events, groups. For personal profiles, the id may be alphanumeric (if the person has signed up for a custom Facebook Username at http://www.facebook.com/username/), but in general the id will be numeric, and auto-assigned by Facebook when the user signed up. For certain types of objects (like photo albums), the value of id will not be obvious from the URL within the Facebook website. In some cases, you'll get an error message, like:

```json
{
  "error": {
    "type": "OAuthAccessTokenException",
    "message": "An access token is required to request this resource."
  }
}
```

Accessing the Graph API through AS3

Now that you've got an idea of how easy it is to access and read Facebook data in a browser, we'll see how to fetch it in AS3.

Time for action – retrieving a Page's information in AS3

Set up the project. Check that the project compiles with no errors (there may be a few warnings, depending on your IDE). You should see a 640 x 480 px SWF, all white, with just three buttons in the top-left corner: Zoom In, Zoom Out, and Reset View.

This project is the basis for a Rich Internet Application (RIA) that will be able to explore all of the information on Facebook using the Graph API. All the code for the UI is in place, just waiting for some Graph data to render. Our job is to write code to retrieve the data and pass it on to the renderers. I'm not going to break down the entire project and explain what every class does. What you need to know at the moment is that a single instance of the controllers.CustomGraphContainerController class is created when the project is initialized, and it is responsible for directing the flow of data to and from Facebook. It inherits some useful methods for this purpose from the controllers.GCController class; we'll make use of these later on.

Open the CustomGraphContainerController class in your IDE. It can be found in src/controllers/CustomGraphContainerController.as, and should look like the listing below:

```actionscript
package controllers
{
  import ui.GraphControlContainer;

  public class CustomGraphContainerController extends GCController
  {
    public function CustomGraphContainerController(a_graphControlContainer:GraphControlContainer)
    {
      super(a_graphControlContainer);
    }
  }
}
```

The first thing we'll do is grab the Graph API's representation of Packt Publishing's Page via a Graph URL, like we did using the web browser. For this we can use a URLLoader. The URLLoader and URLRequest classes are used together to download data from a URL. The data can be text, binary data, or URL-encoded variables. The download is triggered by passing a URLRequest object, whose url property contains the requested URL, to the load() method of a URLLoader. Once the required data has finished downloading, the URLLoader dispatches a COMPLETE event. The data can then be retrieved from its data property.

Modify CustomGraphContainerController.as like so (the highlighted lines are new):

```actionscript
package controllers
{
  import flash.events.Event;
  import flash.net.URLLoader;
  import flash.net.URLRequest;

  import ui.GraphControlContainer;

  public class CustomGraphContainerController extends GCController
  {
    public function CustomGraphContainerController(a_graphControlContainer:GraphControlContainer)
    {
      super(a_graphControlContainer);

      var loader:URLLoader = new URLLoader();
      var request:URLRequest = new URLRequest();

      //Specify which Graph URL to load
      request.url = "https://graph.facebook.com/PacktPub";

      loader.addEventListener(Event.COMPLETE, onGraphDataLoadComplete);

      //Start the actual loading process
      loader.load(request);
    }

    private function onGraphDataLoadComplete(a_event:Event):void
    {
      var loader:URLLoader = a_event.target as URLLoader;

      //obtain whatever data was loaded, and trace it
      var graphData:String = loader.data;
      trace(graphData);
    }
  }
}
```

All we're doing here is downloading whatever information is at https://graph.facebook.com/PacktPub and tracing it to the output window. Test your project, and take a look at your output window. You should see the following data:

```json
{"id":"204603129458","name":"Packt Publishing","picture":"http://profile.ak.fbcdn.net/hprofile-ak-snc4/hs302.ash1/23274_204603129458_7460_s.jpg","link":"http://www.facebook.com/PacktPub","category":"Products_other","username":"PacktPub","company_overview":"Packt is a modern, IT focused book publisher, specializing in producing cutting-edge books for communities of developers, administrators, and newbies alike.\n\nPackt published its first book, Mastering phpMyAdmin for MySQL Management in April 2004.","fan_count":412}
```

If you get an error, check that your code matches the previously mentioned code. If you see nothing in your output window, make sure that you are connected to the Internet. If you still don't see anything, it's possible that your security settings prevent you from accessing the Internet via Flash, so check those.


Getting Started with the Alfresco Records Management Module

Packt
18 Jan 2011
7 min read
Alfresco 3 Records Management — comply with regulations and secure your organization's records with Alfresco Records Management:

- Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance
- The first and only book to focus exclusively on Alfresco Records Management
- Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements
- Learn in detail about the software internals to get a jump-start on performing customizations

The Alfresco stack

Alfresco software was designed for the enterprise, and as such, supports a variety of different stack elements. Supported Alfresco stack elements include some of the most widely used operating systems, relational databases, and application servers. The core infrastructure of Alfresco is built on Java. This core provides the flexibility for the server to run on a variety of operating systems, like Microsoft Windows, Linux, Mac OS, and Sun Solaris. The use of Hibernate allows Alfresco to map objects and data from Java into almost any relational database. The databases that the Enterprise version of Alfresco software is certified to work with include Oracle, Microsoft SQL Server, MySQL, PostgreSQL, and DB2. Alfresco also runs on a variety of application servers, including Tomcat, JBoss, WebLogic, and WebSphere. Other relational databases and application servers may work as well, although they have not been explicitly tested and are not supported. Details of which Alfresco stack elements are supported can be found on the Alfresco website: http://www.alfresco.com/services/subscription/supported-platforms/3-x/.

Depending on the target deployment environment, different elements of the Alfresco stack may be favored over others. The exact configuration details for setting up the various stack element options are not discussed in this book. You can find ample discussion and details on the Alfresco wiki on how to configure, set up, and change the different stack elements. The version-specific installation and setup guides provided by Alfresco also contain very detailed information. The example descriptions and screenshots given in this article are based on the Windows operating system. The details may differ for other operating systems, but you will find that the basic steps are very similar. Additional information on the internals of Alfresco software can be found on the Alfresco wiki at http://wiki.alfresco.com/wiki/Main_Page.

Alfresco software

As a first step to getting Alfresco Records Management up and running, we need to acquire the software. Whether you plan to use the Enterprise or the Community version of Alfresco, you should note that the Records Management module was not available until late 2009. The Records Management module was first certified with the 3.2 release of Alfresco Share. The first Enterprise version of Alfresco that supported Records Management was version 3.2R, which was released in February 2010.

Make sure the software versions are compatible

It is important to note that there was an early version of Records Management that was built for the Alfresco JSF-based Explorer client. That version was not certified for DoD 5015.2 compliance and is no longer supported by Alfresco. In fact, the Alfresco Explorer version of Records Management is not compatible with the Share version of Records Management, and trying to use the two implementations together can result in corrupt data.

It is also important to make sure that the version of the Records Management module that you use matches the version of the base Alfresco Share software. For example, trying to use the Enterprise version of Records Management on a Community install of Alfresco will lead to problems, even if the version numbers are the same. The 3.3 Enterprise version of Records Management, as another example, is also not fully compatible with the 3.2R Enterprise version of Alfresco software.

Downloading the Alfresco software

The easiest way to get Alfresco Records Management up and running is by doing a fresh install of the latest available Alfresco software.

Alfresco Community

The Community version of Alfresco is a great place to get started, especially if you are just interested in evaluating whether Alfresco software meets your needs; with no license fees to worry about, there's really nothing to lose in going this route. Since Alfresco Community software is constantly in an "in development" state and is not as rigorously tested, it tends not to be as stable as the Enterprise version. But, in terms of the Records Management module for the 3.2+ releases of the software, the Community implementation is feature-complete. This means that the same Records Management features found in the Enterprise version are also found in the Community version.

The caveat with using the Community version is that support is only available from the Alfresco community, should you run across a problem. The Enterprise release also includes support from the Alfresco support team and may have bug fixes or patches not yet available for the Community release. Also of note is the fact that there are other repository features beyond those of Records Management, especially in the area of scalability, which are available only with the Enterprise release.

Building from source code

It is possible to get the most recent version of the Alfresco Community software by getting a snapshot copy of the source code from the publicly accessible Alfresco Subversion source code repository. A version of the software can be built from a snapshot of the source code taken from there. But unless you are anxiously waiting for a new Alfresco feature or bug fix and need to get your hands immediately on a build with that new code included, for most people, building from source is probably not the route to go.

Building from source code can be time consuming and error prone. The final software version that you build can often be very buggy or unstable due to code that has been checked in prematurely or changes that might be in the process of being merged into the Community release, but which weren't completely checked in at the time you updated your snapshot of the code base. If you do decide that you'd like to try to build Alfresco software from source code, details on how to get set up to do that can be found on the Alfresco wiki: http://wiki.alfresco.com/wiki/Alfresco_SVN_Development_Environment.

Download a Community version snapshot build

Builds of snapshots of the Alfresco Community source code are periodically taken and made available for download. Using a pre-built Community version of Alfresco software saves you much hassle and headache by not having to do the build from scratch. While not thoroughly tested, the snapshot Community builds have been tested sufficiently that they tend to be stable enough to show most of the functionality available for the release, although not everything may be working completely.

Links to the most recent Alfresco Community version builds can be found on the Alfresco wiki: http://wiki.alfresco.com/wiki/Download_Community_Edition.

Alfresco Enterprise

The alternative to using Alfresco open source Community software is the Enterprise version of Alfresco. For most organizations, the fully certified Enterprise version of Alfresco software is the recommended choice. The Enterprise version of Alfresco software has been thoroughly tested and is fully supported. Alfresco customers and partners have access to the most recent Enterprise software from the Alfresco Network site: http://network.alfresco.com/. Trial copies of Alfresco Enterprise software can be downloaded from the Alfresco site: http://www.alfresco.com/try/. Time-limited access to on-demand instances of Alfresco software is also available and is a great way to get a good understanding of how Alfresco software works.


Replication in MySQL Admin

Packt
17 Jan 2011
10 min read
Replication is an interesting feature of MySQL that can be used for a variety of purposes. It can help to balance server load across multiple machines, ease backups, provide a workaround for the lack of fulltext search capabilities in InnoDB, and much more. The basic idea behind replication is to reflect the contents of one database server (this can include all databases, only some of them, or even just a few tables) to more than one instance. Usually, those instances will be running on separate machines, even though this is not technically necessary. Traditionally, MySQL replication is based on the surprisingly simple idea of repeating the execution of all statements issued that can modify data—not SELECT—against a single master machine on other machines as well. Provided all secondary slave machines had identical data contents when the replication process began, they should automatically remain in sync. This is called Statement Based Replication (SBR). With MySQL 5.1, Row Based Replication (RBR) was added as an alternative method for replication, targeting some of the deficiencies SBR brings with it. While at first glance it may seem superior (and more reliable), it is not a silver bullet—the pain points of RBR are simply different from those of SBR. Even though there are certain use cases for RBR, all recipes in this chapter will be using Statement Based Replication. While MySQL makes replication generally easy to use, it is still important to understand what happens internally to be able to know the limitations and consequences of the actions and decisions you will have to make. We assume you already have a basic understanding of replication in general, but we will still go into a few important details. Statement Based Replication SBR is based on a simple but effective principle: if two or more machines have the same set of data to begin with, they will remain identical if all of them execute the exact same SQL statements in the same order. Executing all statements manually on multiple machines would be extremely tedious and impractical. SBR automates this process. In simple terms, it takes care of sending all the SQL statements that change data on one server (the master) to any number of additional instances (the slaves) over the network. The slaves receiving this stream of modification statements execute them automatically, thereby effectively reproducing the changes the master machine made to its data originally. That way they will keep their local data files in sync with the master's. One thing worth noting here is that the network connection between the master and its slave(s) need not be permanent. In case the link between a slave and its master fails, the slave will remember up to which point it had read the data last time and will continue from there once the network becomes available again. In order to minimize the dependency on the network link, the slaves will retrieve the binary logs (binlogs) from the master as quickly as they can, storing them on their local disk in files called relay logs. This way, the connection, which might be some sort of dial-up link, can be terminated much sooner while executing the statements from the local relay-log asynchronously. The relay log is just a copy of the master's binlog. The following image shows the overall architecture: Filtering In the image you can see that each slave may have its individual configuration on whether it executes all the statements coming in from the master, or just a selection of those. 
This can be helpful when you have some slaves dedicated to special tasks, where they might not need all the information from the master. All of the binary logs have to be sent to each slave, even though it might then decide to throw away most of them. Depending on the size of the binlogs, the number of slaves and the bandwidth of the connections in between, this can be a heavy burden on the network, especially if you are replicating via wide area networks. Even though the general idea of transferring SQL statements over the wire is rather simple, there are lots of things that can go wrong, especially because MySQL offers some configuration options that are quite counter-intuitive and lead to hard-to-find problems. For us, this has become a best practice: "Only use qualified statements and replicate-*-table configuration options for intuitively predictable replication!" What this means is that the only filtering rules that produce intuitive results are those based on the replicate-do-table and replicate-ignore-table configuration options. This includes those variants with wildcards, but specifically excludes the all-database options like replicate-do-db and replicate-ignore-db. These directives are applied on the slave side on all incoming relay logs. The master-side binlog-do-* and binlog-ignore-* configuration directives influence which statements are sent to the binlog and which are not. We strongly recommend against using them, because apart from hard-to-predict results they will make the binlogs undesirable for server backup and restore. They are often of limited use anyway as they do not allow individual configurations per slave but apply to all of them. Setting up automatically updated slaves of a server based on a SQL dump In this recipe, we will show you how to prepare a dump file of a MySQL master server and use it to set up one or more replication slaves. These will automatically be updated with changes made on the master server over the network. Getting ready You will need a running MySQL master database server that will act as the replication master and at least one more server to act as a replication slave. This needs to be a separate MySQL instance with its own data directory and configuration. It can reside on the same machine if you just want to try this out. In practice, a second machine is recommended because this technique's very goal is to distribute data across multiple pieces of hardware, not place an even higher burden on a single one. For production systems you should pick a time to do this when there is a lighter load on the master machine, often during the night when there are less users accessing the system. Taking the SQL dump uses some extra resources, but unless your server is maxed out already, the performance impact usually is not a serious problem. Exactly how long the dump will take depends mostly on the amount of data and speed of the I/O subsystem. You will need an administrative operating system account on the master and the slave servers to edit the MySQL server configuration files on both of them. Moreover, an administrative MySQL database user is required to set up replication. We will just replicate a single database called sakila in this example. Replicating more than one database In case you want to replicate more than one schema, just add their names to the commands shown below. To replicate all of them, just leave out any database name from the command line. How to do it... 
At the operating system level, connect to the master machine and open the MySQL configuration file with a text editor. Usually it is called my.ini on Windows and my.cnf on other operating systems. On the master machine, make sure the following entries are present and add them to the [mysqld] section if not already there: server-id=1000 log-bin=master-bin If one or both entries already exist, do not change them but simply note their values. The log-bin setting need not have a value, but can stand alone as well. Restart the master server if you need to modify the configuration. Create a user account on the master that can be used by the slaves to connect: master> grant replication slave on *.* to 'repl'@'%' identified by 'slavepass'; Using the mysqldump tool included in the default MySQL install, create the initial copy to set up the slave(s): $ mysqldump -uUSER -pPASS --master-data --single-transaction sakila > sakila_master.sql Transfer the sakila_master.sql dump file to each slave you want to set up, for example, by using an external drive or network copy. On the slave, make sure the following entries are present and add them to the [mysqld] section if not present: server-id=1001 replicate-wild-do-table=sakila.% When adding more than one slave, make sure the server-id setting is unique among master and all clients. Restart the slave server. Connect to the slave server and issue the following commands (assuming the data dump was stored in the /tmp directory): slave> create database sakila; slave> use sakila; slave> source /tmp/sakila_master.sql; slave> CHANGE MASTER TO master_host='master.example.com', master_port=3306, master_ user='repl', master_password='slavepass'; slave> START SLAVE; Verify the slave is running with: slave> SHOW SLAVE STATUSG ************************** 1. row *************************** ... Slave_IO_Running: Yes Slave_SQL_Running: Yes ... How it works... Some of the instructions discussed in the previous section are to make sure that both master and slave are configured with different server-id settings. This is of paramount importance for a successful replication setup. If you fail to provide unique server-id values to all your server instances, you might see strange replication errors that are hard to debug. Moreover, the master must be configured to write binlogs—a record of all statements manipulating data (this is what the slaves will receive). Before taking a full content dump of the sakila demo database, we create a user account for the slaves to use. This needs the REPLICATION SLAVE privilege. Then a data dump is created with the mysqldump command line tool. Notice the provided parameters --master-data and --single-transaction. The former is needed to have mysqldump include information about the precise moment the dump was created in the resulting output. The latter parameter is important when using InnoDB tables, because only then will the dump be created based on a transactional snapshot of the data. Without it, statements changing data while the tool was running could lead to an inconsistent dump. The output of the command is redirected to the /tmp/sakila_master.sql file. As the sakila database is not very big, you should not see any problems. However, if you apply this recipe to larger databases, make sure you send the data to a volume with sufficient free disk space—the SQL dump can become quite large. 
If you open the uncompressed dump file with an editor, you will see a line with a CHANGE MASTER TO statement. This is what --master-data is for. Once the file is imported on a slave, the slave will know at which point in time (or rather, at which binlog position) the dump was taken. Everything that happened on the master after that needs to be replicated.

Finally, we configure the slave to use the credentials set up on the master earlier to connect, and then start the replication. Notice that the CHANGE MASTER TO statement used for that does not include any information about log positions or file names, because that was already taken from the dump file just read in.

From here on, the slave will go ahead and retrieve all SQL statements sent from the master, store them in its relay logs, and then execute them against the local data set.

This recipe is very important because the following recipes are based on it! So in case you have not fully understood the above steps yet, we recommend that you go through them again before trying out more complicated setups.
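Before moving on, a quick way to convince yourself that changes actually flow from master to slave, beyond checking SHOW SLAVE STATUS, is to make a small change on the master and look for it on the slave. This is only a sketch; it assumes the standard sakila schema from the dump, and the inserted value is made up:

   master> USE sakila;
   master> INSERT INTO language (name) VALUES ('Klingon');

   slave> USE sakila;
   slave> SELECT language_id, name FROM language ORDER BY language_id DESC LIMIT 1;

If replication is working, the new row shows up on the slave within moments.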

Roles and Responsibilities for Records Management Implementation in Alfresco 3

Packt
17 Jan 2011
10 min read
  Alfresco 3 Records Management Comply with regulations and secure your organization’s records with Alfresco Records Management. Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance The first and only book to focus exclusively on Alfresco Records Management Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements Learn in detail about the software internals to get a jump-start on performing customizations Appendix         Read more about this book       (For more resources on this subject, see here.) The steering committee To succeed, our Records Management program needs continued commitment from all levels of the organization. A good way to cultivate that commitment is by establishing a steering committee for the records program. From a high level, the steering committee will direct the program, set priorities for it, and assist in making decisions. The steering committee will provide the leadership to ensure that the program is adequately funded, staffed, properly prioritized with business objectives, and successfully implemented. Committee members should know the organization well and be in a position to be both able and willing to make decisions. Once the program is implemented, the steering committee should not be dissolved; it still will play an important function. It will continue to meet and oversee the Records Management program to make sure that it is properly maintained and updated. The Records Management system is not something that can simply be turned on and forgotten. The steering committee should meet regularly, track the progress of the implementation, keep abreast of changes in regulatory controls, and be proactive in addressing the needs of the Records Management program. Key stakeholders The Records Management steering committee should include executives and senior management from core business units such as Compliance, Legal, Finance, IT, Risk Management, Human Resources, and any other groups that will be affected by Records Management. Each of these groups will represent the needs and responsibilities of their respective groups. They will provide input relative to policies and procedures. The groups will work together to develop a priority-sequenced implementation plan that all can agree upon. Creating a committee that is heavily weighted with company executives will visibly demonstrate that our company is strongly committed to the program and it ensures that we will have the right people on board when it is time to make decisions, and that will keep the program on track. The steering committee should also include representatives from Records Management, IT, and users. Alternatively, representatives from these groups can be appointed and, if not members of the steering committee, they should report directly to the steering committee on a regular basis: The Program Contact The Program Contact is the chair of the steering committee. This role is typically held by someone in senior management and is often someone from the technology side of the business, such as the Director of IT. The Program Contact signs off with the final approval on technology deliverables and budget items. The Program Sponsor A key member of the records steering committee is the Program Sponsor or Project Champion. 
This role is typically held by a senior executive who will be able to represent the records initiative within the organization's executive team. The Sponsor will be able to establish the priority of the records program, relative to other organizational initiatives and be able to persuade the executive team and others in the company of the importance of the records management initiative. Corporate Records Manager Another key role of the steering committee is the Corporate Records Manager. This role acts as the senior champion for the records program and is responsible for defining the procedures and policies around Records Management. The person in this role will promote the rollout of and the use of the records program. They will work with each of the participating departments or groups, cultivating local champions for Records Management within each of those groups. The Corporate Records Manager must effectively communicate with business units to explain the program to all staff members and work with the various business units to collect user feedback so that those ideas can be incorporated into the planning process. The Corporate Records Manager will try to minimize any adverse user impact or disruption. Project Manager The Project Manager typically is on the steering committee or reports directly to it. The Project Manager plans and tracks the implementation of work on the program and ensures that program milestones are met. The person in this role manages both, the details of the system setup and implementation. This Project Manager also manages the staff time spent working on the program tasks. Business Analyst The Business Analyst analyzes business processes and records, and from these, creates a design and plan for the records program implementation. The Business Analyst works closely with the Corporate Records Manager to develop records procedures and provides support for the system during rollout. Systems Administrator The Systems Administrator leads the technical team for supporting the records application. The Systems Administrator specifies and puts into place the hardware required for the records program, the storage space, memory, and CPU capabilities. The person in this role monitors the system performance and backs up the system regularly. The Systems Administrator leads the team to apply software upgrades and to perform system troubleshooting. The Network Administrator The Network Administrator ensures that the network infrastructure is in place for the records program to support the appropriate bandwidth for the server and client workstations that will access the application. The Network Administrator works closely with the Systems Administrator. The Technical Analyst The Technical Analyst is responsible for analyzing the configuration of the records program. The Technical Analyst needs to work closely with the Business Analyst and Corporate Records Manager. The person in this role will specify the classification and structure used for the records program File Plan. They will also specify the classes of documents stored as records in the records application and the associated metadata for those documents. The Records Assistant The Records Assistant assists in the configuration of the records application. Tasks that the Records Assistant will perform include data entry and creating the folder structure hierarchy of the File Plan within the records application based on the specification created by the Technical Analyst. 
The Records Developer The Records Developer is a software engineer that is assigned to support the implementation of the records program, based on requirements derived by the Business Analyst. The Records Developer may need to edit and update configuration files, often using technologies like XML. The Records Developer may also need to make customizations to the user interface of the application. The Trainer The Trainer will work with end users to ensure that they understand the system and their responsibilities in interacting with it. The trainer typically creates training materials and provides training seminars to users. The Technical Support Specialist The Technical Support Specialist provides support to users on the functioning of the Records Management application. This person is typically an advanced user and is trained to be able to provide guidance in interacting with the application. But more than just the Records Management application, the support specialist should also be well versed in and be able to assist users and answer their questions about records processes and procedures, as well as concepts like retention and disposition of documents. The Technical Support Specialist will, very often, be faced with requests or questions that are really enhancement requests. The support specialist needs to have a good understanding of the scope of the records implementation and be able to distinguish an enhancement request from a defect or bug report. Enhancements should be collected and routed back through the Project Manager and, depending on the nature of the request or problem, possibly even to the Corporate Records Manager or the Steering Committee. Similarly, application defects or bugs that are found should be reported back through to the Project Manager. Bug reports will be prioritized by the Project Manager, as appropriate, assigned to the Technical Developers, or reported to the Systems Integrator or to Alfresco. The Users The Users are the staff members who will use the Records Management application as part of their job. Users are often the key to the success or failure of a records program. Unfortunately, users are one aspect of the project that is often overlooked. Obviously, it is important that the records application be well designed and meet the objectives and requirements set out for it. But if users complain and can't accept it, then the program will be doomed to failure. Users will often be asked to make changes to processes that they have become very comfortable with. Frequent and early communication with users is a must in order to ultimately gain their acceptance and participation. Prior to and during the implementation of the records system, users should receive status updates and explanations from the Corporate Records Manager and also from the Records Manager lead in their business unit. It is important that frequent communications be made with users to ensure their opinions and ideas are heard, and also so that they will learn to be able to most effectively use the records system. Once the application is ready, or better yet, well before the application goes live, users should attend training sessions on proper records-handling behavior; they should experience hands-on training with the application; and they should also be instructed in how best to communicate with the Technical Support Specialist, should they ever have questions or encounter any problems. 
Alfresco, Consultants, and Systems Integrators Alfresco is the software vendor for Alfresco Records Management, but Alfresco typically does not work directly with customers. We could go at it alone, but more likely, we'll probably choose to work directly with one of Alfresco's System Integration partners or consultants in planning for and setting up our system. Depending on the size of our organization and the available skill set within it, the Systems Integrator can take on as much or as little of the burden for helping us to get up and running with our Records Management program. Almost any of the Technical Team roles discussed in this section, like those of the Analyst and Developer, and even the role of the Project Manager, are ones that can be performed by a Systems Integrator. A list of certified Alfresco Integrators can be found on the Alfresco site: http://www.alfresco.com/partners/search.jsp?t=si A Systems Integrator can bring to our project an important breadth of experience that can help save time and ensure that our project will go smoothly. Alfresco Systems Integration partners know their stuff. They are required to be certified in Alfresco technology and they have worked with Alfresco extensively. They are familiar with best practices and have picked up numerous implementation tips and tricks having worked on similar projects with other clients.

Introduction to Successful Records Management Implementation in Alfresco 3

Packt
14 Jan 2011
15 min read
  Alfresco 3 Records Management Comply with regulations and secure your organization’s records with Alfresco Records Management. Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance The first and only book to focus exclusively on Alfresco Records Management Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements Learn in detail about the software internals to get a jump-start on performing customizations  A preliminary investigation will also give us good information about the types of records we have and roughly how many records we're talking about. We'll also dig deeper into the area of Authority Documents and we'll determine exactly what our obligations are as an organization in complying with them. The data that we collect in the preliminary investigation will provide the basis for us to make a Business Case that we can present to the executives in the organization. It will outline the benefits and advantages of implementing a records system. We also will need to put in place and communicate organization-wide a formal policy that explains concisely the goals of the records program and what it means to the organization. The information covered in this article is important and easily overlooked when starting a Records Management program. We will discuss: The Preliminary Investigation Authority Documents The Steering Committee and Roles in the Records Management Program Making the Business Case for Records Management Project Management Best practices and standards In this article, we will focus on discussing Records Management best practices. Best practices are the processes, methods, and activities that, when applied correctly, can achieve the most repeatable, effective, and efficient results. While an important function of standards is to ensure consistency and interoperability, standards also often provide a good source of information for how to achieve best practice. Much of our discussion here draws heavily on the methodology described in the DIRKS and ISO-15489 standards that describe Records Management best practices. Before getting into a description of best practices though, let's look and see how these two particular standards have come into being and how they relate to other Records Management standards, like the DoD 5015.2 standard. Origins of Records Management Somewhat surprisingly, standards have only existed in Records Management for about the past fifteen years. But that's not to say that prior to today's standards, there wasn't a body of knowledge and written guidelines that existed as best practices for managing records. Diplomatics Actually, the concept of managing records can be traced back a long way. In the Middle Ages in Europe, important written documents from court transactions were recognized as records, and even then, there were issues around establishing authenticity of records to guard against forgery. From those early concerns around authenticity, the science of document analysis called diplomatics came into being in the late 1600s and became particularly important in Europe with the rise of government bureaucracies in the 1800s. While diplomatics started out as something closer to forensic handwriting analysis than Records Management, it gradually established principles that are still important to Records Management today, such as reliability and authenticity. 
Diplomatics even emphasized the importance of aligning rules for managing records with business processes, and it treated all records the same, regardless of the media that they are stored on. Records Management in the United States Records Management is something that has come into being very slowly in the United States. In fact, Records Management in the United States is really a twentieth century development. It wasn't even until 1930 that 90 percent of all births and deaths in the United States were recorded. The United States National Archives was first established in 1934 to manage only the federal government historical records, but the National Archives quickly became involved in the management of all federal current records. In 1941, a records administration program was created for federal agencies to transfer their historical records to the National Archives. In 1943, the Records Disposal Act authorized the first use of record disposition schedules. In 1946, all agencies in the executive branch of government were ordered as part of Executive Order 9784 to implement Records Management programs. It wasn't until 1949 with the publication of a pamphlet called Public Records Administration, written by an archivist at the National Archives, that the idea of Records Management was beginning to be seen as an activity that is separate and distinct from the long-term archival of records for preservation. Prior to the 1950s in the United States, most businesses did not have a formalized program for records management. However, that slowly began to change as the federal government provided itself as an example for how records should be managed. The 1950 Federal Records Act formalized Records Management in the United States. The Act included ideas about the creation, maintenance, and disposition of records. Perhaps somewhat similar to the dramatic growth in electronic documents that we are seeing today, the 1950s saw a huge increase in the number of paper records that needed to be managed. The growth in the volume of records and the requirements and the responsibilities imposed by the Federal Records Act led to the creation of regional records centers in the United States, and those centers slowly became models for records managers outside of government. In 1955, the second Hoover Commission was tasked with developing recommendations for paperwork management and published a document entitled Guide to Record Retention Requirements in 1955. While not officially sanctioned as a standard, this document, in many ways, served the same purpose. The guide was popular and has been republished frequently since then and has served as an often-used reference by both government and non-government organizations. As late as 1994, a revised version of the guide was printed by the Office of the Federal Register. That same year, in 1955, ARMA International, the international organization for records managers, was founded. ARMA continues through today to provide a forum for records and information managers, both inside and outside the government, to share information about best practices in the area of Records Management. From the 1950s, companies and non-government organizations were becoming more involved with record management policies, and the US federal government continued to drive much of the evolution of Records Management within the United States. 
In 1976, the Federal Records Act was amended and sections were added that emphasized paperwork reduction and the importance of documenting the recordkeeping process. The concept of the record lifecycle was also described in the amendments to the Act. In 1985, the National Archives was renamed as NARA, the National Archives and Records Administration, finally acknowledging in the name the role the agency plays in managing records as well as being involved in the long-term archival and preservation of documents. However, it wasn't until the 1990s that standards around Records Management began to take shape. In 1993, a government task force in the United States that included NARA, the US Army, and the US Air Force, began to devise processes for managing records that would include both the management of paper and electronic documents. The recommendations of that task force ultimately led to the DoD-5015.2 standard that was first released in 1997. Australia's AS-4390 and DIRKS In parallel to what was happening in the United States, standards for Records Management were also advancing in Australia. AS-4390 Standards Australia issued AS-4390 in 1996, a document that defined the scope of Records Management with recommendations for implementation in both public and private sectors in Australia. This was the first standard issued by any nation, but much of the language in the standard was very specific, making it usable really only within Australia. AS-4390 approached the management of records as a "continuum model" and addressed the "whole extent of the records' existence". DIRKS In 2000, the National Archives of Australia published DIRKS (Design and Implementation of Recordkeeping System), a methodology for implementing AS-4390. The Australian National Archives developed, tested, and successfully implemented the approach, summarizing the methodology for managing records into an eight-step process. The eight steps of the DIRKS methodology include: Organization assessment: Preliminary Investigation Analysis of business activity Identification of records requirements Assess areas for improvement: Assessment of the existing system Strategies for recordkeeping Design, implement, and review the changes: Design the recordkeeping system Implement the recordkeeping system Post-implementation review An international Records Management standard These two standards, AS-4390 and DIRKS, have had a tremendous influence not only within Australia, but also internationally. In 2001, ISO-15489 was published as an international standard for best practices for Records Management. Part one of the standard was based on AS-4390, and part two was based on the guidelines, as laid out in DIRKS. The same eight-step methodology of DIRKS is used in the part two guidelines of ISO-15489. The DIRKS manual can be freely downloaded from the National Archives of Australia: http://www.naa.gov.au/recordsmanagement/publications/dirks-manual.aspx The ISO-15489 document can be purchased from ISO: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=31908 and http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35845 ISO-15489 has been a success in terms of international acceptance. 148 countries are members of ISO, and many of the participating countries have embraced the use of ISO-15489. Some countries where ISO-15489 is actively applied include Australia, China, UK, France, Germany, Netherlands, and Jamaica. 
Both ARMA International and AIIM now also promote the importance of the ISO-15489 standard. Much of the appeal behind the ISO-15489 standard is the fact that it is fairly generic. Because it describes the recordkeeping process at a very high level, it avoids contentious details that may be specific to any particular Records Management implementation. Consider, for example, the eight steps of the DIRKS process, as listed above, and replace the words "record" and "recordkeeping" with the name of some other type of enterprise software or project, like "ERP". The steps and associated recommendations from DIRKS are equally applicable. In fact, we recognize clear parallels between the steps presented in the DIRKS methodology and methodologies used for Project Management. Later in this article, we will look at similarities between Records Management and Project Management methodologies like PMBOK and Agile. Does ISO-15489 overlap with standards like DoD-5015.2 and MoReq? ISO-15489 differs considerably in approach from other Records Management standards, like the DoD-5015.2 standard and the MoReq standard which developed in Europe. While ISO-15489 outlines basic principles of Records Management and describes best practices, these latter two standards are very prescriptive in terms of detailing the specifics for how to implement a Records Management system. They are essentially functional requirement documents for computer systems. MoReq (Model Requirements for the Management of Electronic Records) was initiated by the DLM Forum and funded by the European Commission. MoReq was first published in 2001 as MoReq1 and was then extensively updated and republished as MoReq2 in 2008. In 2010, an effort was undertaken to update the specification with the new name MoReq2010. The MoReq2 standard has been translated into 12 languages and is referenced frequently when building Records Management systems in Europe today. Other international standards for Records Management A number of other standards exist internationally. In Australia, for example, the Public Record Office has published a standard known as the Victorian Electronic Records Strategy (VERS) to address the problem of ensuring that electronic records can be preserved for long periods of time and still remain accessible and readable. The preliminary investigation Before we start getting our hands dirty with the sticky details of designing and implementing our records system, let's first get a big-picture idea of how Records Management currently fits into our organization and then define our vision for the future of Records Management in our organization. To do that, let's make a preliminary investigation of the records that our organization deals with. In the preliminary investigation, we'll make a survey of the records in our organization to find out how they are currently being handled. The results of the survey will provide important input into building the Business Case for moving forward with building a new Records Management system for our organization. With the results of the preliminary investigation, we will be able to create an information map or diagram of where records currently are within our organization and which groups of the organization those records are relevant to. 
With that information, we will be able to create a very high-level charter for the records program, provide data to be used when building the Business Case for Records Management, and then have sufficient information to be able to calculate a rough estimate of the cost and effort needed for the program scope. Before executing on the preliminary investigation, a detailed plan of attack for the investigation should be made. While the primary goal of the investigation is to gather information, a secondary goal should be to do it in a way that minimizes any disruptions to staff members. To perform the investigation, we will need assistance from the various business units in the organization. Before starting, a 'heads up' should be sent out to the managers of the different business units involved so that they will understand the nature of the investigation, when it will be carried out, and they'll know roughly the amount of time that both they and their unit will need to make available to assist in the investigation. It would also be useful to hold a briefing meeting with staff members from business units, where we expect to find most of the records. The records survey Central to the preliminary investigation is the records survey, which is taken across the organization. A records survey attempts to identify the location and record types for both the electronic and non-electronic records used in the organization. Physical surveys versus questionnaires The records survey is usually either carried out as a physical one or as one managed remotely via questionnaires. In a physical survey, members of the records management team visit each business unit, and working together with staff members from that unit, make a detailed inventory. During the survey, all physical storage locations, such as cabinets, closets, desks, and boxes are inspected. Staff members are asked where they store their files, which business applications they use, and which network drives they have access to. The alternative to the physical survey is to send questionnaires to each of the business units and to ask them to complete the forms on their own. Inspections similar to that of the physical survey would be made, but the business unit is not supported by a records management team member. Which of the two approaches we use will depend on the organization. Of course, a hybrid approach, where a combination of both physical surveys and questionnaires is used would work too. Physical in-person surveys tend to provide more accurate and complete inventories, but they also are typically more expensive and time consuming to perform. Questionnaires, while cheaper, rely on each of the individual business units to complete the information on their own, which means that the reporting and investigation styles used by the different units might not be uniform. There is also the problem that some business units may not be sufficiently motivated to complete the questionnaires in a timely manner. Preparing for the survey: Review existing documentation Before we begin the survey, we should check to see if there already exists any background documentation that describes how records are currently being handled within the organization. Documentation has a habit of getting out of date quickly. Documentation can also be deceiving because sometimes it is written, but never implemented, or implemented in ways that deviate dramatically from the originally written description. 
So if we're actually lucky enough to find any documentation, we'll need to also validate how accurate that information really is. These are some examples of documents which may already exist and which can provide clues about how some organizational records are being handled today: The organization's disaster recovery plan Previous records surveys or studies The organization's record management policy statement Internal and external audit reports that involve consideration of records Organizational reports like risk assessment and cost-benefit analyses Other types of documents may also exist, which can be good indicators for where records, particularly paper records, might be getting stored. These include: Blueprints, maps, and building plans that show the location of furniture and equipment Contracts with storage companies or organizations that provide records or backup services Equipment and supply inventories that may indicate computer hardware Lists of databases, enterprise application software, and shared drives It may take some footwork and digging to find out exactly where and how records in the organization are currently being stored. Physical records could be getting stored in numerous places throughout office and storage areas. Electronic records might be currently saved on shared drives, local desktops, or other document repositories. The main actions of the records survey can be summarized by the LEAD acronym: Locate the places where records are being stored Examine the records and their contents Ask questions about the records to understand their significance Document the information about the records

Integrating Twitter with Magento

Packt
14 Jan 2011
2 min read
Integrating your Magento website with Twitter is a useful way to stay connected with your customers. You'll need a Twitter account (or, more specifically, an account for your business), but once that's in place it's actually pretty easy.

Adding a 'Follow Us On Twitter' button to your Magento store

One of the simpler ways to integrate your store's Twitter feed with Magento is to add a 'Follow Us On Twitter' button to your store's design.

Generating the markup from the Twitter website

Go to the Twitter Goodies website. Select the Follow Buttons option and then select the Looking for Follow us on Twitter buttons? option towards the bottom of the screen. The buttons will now change to the FOLLOW US ON Twitter buttons.

Select the style of button you'd like to use on your Magento store and then select the generated HTML that is provided in the pop-up that is displayed. The generated HTML for the M2 Store's Twitter account (with the username of M2MagentoStore) looks like the following:

<a href="http://www.twitter.com/M2MagentoStore">
<img src="http://twitter-badges.s3.amazonaws.com/follow_us-a.png" alt="Follow M2MagentoStore on Twitter"/>
</a>

Adding a static block in Magento for your Twitter button

Now you will need to create a new static block using the Magento CMS feature: navigate to CMS | Static Blocks in your Magento store's administration panel and click on Add New Block. As you did when creating a static block for the supplier logos used in your store's footer, complete the form to create the new static block. Add the Follow Us On Twitter button to the Content field by disabling the Rich Text Editor with the Show/Hide Editor button and pasting in the markup you generated previously. You don't need to upload an image to your store through Magento's CMS here, as the Twitter buttons are hosted elsewhere. Note that the Identifier field reads follow-twitter; you will need this for the layout changes you are about to make!
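The layout changes themselves are not covered in this extract, but as a rough, hypothetical sketch of how a CMS static block is typically rendered, the follow-twitter identifier could be referenced from a theme's layout XML. The file, the block name, and the footer placement below are illustrative assumptions, not part of the original instructions:

<!-- sketch for a theme's local.xml: render the follow-twitter static block in the footer -->
<reference name="footer">
    <block type="cms/block" name="follow.twitter.badge">
        <action method="setBlockId"><block_id>follow-twitter</block_id></action>
    </block>
</reference>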

Promoting efficient communication with Moodle

Packt
11 Jan 2011
8 min read
A key component of any quality educational program is its ability to facilitate communication among all of the parties involved in the program. Communication and the subsequent relaying of information and knowledge between instructional faculty, administrators, students, and support personnel must be concise, efficient, and, when so desired, as transparent as possible. Using Moodle as a hub for internal information distribution, collaboration, and communication Moodle's ability to facilitate information flow and communication among users within the system, who are registered users such as students and teachers, is a capability that has been a core function of Moodle since its inception. The module most often used to facilitate communication and information flow is the forum and we will thus focus primarily on creative uses of forums for communication within an educational program. Facilitating intra- or inter-departmental or program communication, collaboration, and information flow Many educational programs comprise sub-units such as departments or programs. These units usually consist of students, teachers, and administrators who interact with one another at varying levels in terms of the type of communication, its frequency, and content. The following example will demonstrate how a sub-unit—the reading program within our language program example—might set up a communication system, using a meta course in Moodle, that accomplishes the following: Allows the program to disseminate information to all students, teachers, and administrators involved in the program. The system must, of course, allow for settings enabling dissemination to only selected groups or to the entre group, if so desired. Establishes a forum for communication between and among teachers, students, and administrators. Again, this system must be fine-tunable such that communication can be limited to specific parties within the program. The example will also demonstrate, indirectly, how a meta course could be set up to facilitate communication and collaboration between individuals from different programs or sub-units. In such a case, the meta course would function as an inter-departmental communication and collaboration system. Time for action – setting up the meta course To set up a communication system that can be finely tuned to allow specific groups of users to interact with each other, follow these steps: We are going to set up a communication system using a meta course. Log in to your site as admin and click on the Show all courses link found at the bottom of your MyCourses block on the front page of your site. At the bottom of the subsequent Course Categories screen, click on the Add a new course button. Change the category from Miscellaneous to Reading and enter a Full name and Short name such as Reading Program and ReadProg. Enter a short description explaining that the course is to function as a communication area for the reading program. Use the drop-down menu next to the meta course heading, shown in the following screenshot, to select Yes in order to make this course a meta course: Change the Start date as you see fit. You don't need to add an Enrollment key under the Availability heading to prevent users who are not eligible to enter the course because the enrollment for meta courses is taken from child courses. If you've gotten into the habit of entering enrollment keys just to be safe however, doing so here won't cause any problems. Change the group setting, found under the Groups heading, to Separate. 
Do not force this setting however, in order to allow it to be set on an individual activity basis. This will allow us to set up forums that are only accessible to teachers and/or administrators. Other forums can be set up to allow only student and teacher access, for example. Click on the Save changes button found at the bottom of the screen and on the next screen, which will be the Child courses screen, search for all reading courses by entering Reading in the search field. After clicking on the Search button to initiate the search, you will see all of the reading courses, including the meta course we have just created. Add all of the courses, except the meta course, as shown in the following screenshot. Use the short name link found in the breadcrumb path at the top-left of the window, shown in the following screenshot, to navigate to the course after you have added all of the reading child courses: What just happened? We just created a meta course and included all of the reading courses as child courses of the meta course. This means that all of the users enrolled in the reading child courses have been automatically enrolled in the meta course with the same roles that they have in the child courses. It should be noted here that enrollments in meta courses are controlled via the enrollments in each of the child courses. If you wish to unenroll a user from a meta course, he or she must be unenrolled from the respective child course. In the next step, we'll create the groups within the meta course that will allow us to create targeted forums. Time for action – creating a group inside the meta course We are now going to create groups within our meta course in order to allow us to specify which users will be allowed to participate in, and view, the forums we set up later. This will allow us to control which sets of users have access to the information and communication that will be contained in each forum. Follow these steps to set up the forums: Log in to your Moodle site as admin and navigate to the meta course we just created. It will be located under the Reading heading from the MyCourses block and titled Reading Program if you followed the steps outlined earlier in this article. Click on the Groups link found inside the Administration block. The subsequent screen will be titled ReadingProg Groups. The ReadingProg portion of the title is from the short name of our course. From this screen, click on the Create group button. Title the group Teachers and write a short description for the group. Ignore the enrollment key option as enrollments for meta courses are controlled by the child course enrollments. Leave the picture field blank unless you would like to designate a picture for this group. Click on the Save changes button to create the group. You will now see the ReadingProg Groups screen again and it will now contain the Teachers group, we just created. Click once on the group name to enable the Add/remove users button. Click on the Add/remove users button to open the Add/remove users window. From this window, enter the word Teacher in the search window and click on the Search button. Select all of the teachers by clicking once on the first teacher and then scrolling to the last teacher and, while holding down the shift button on your keyboard, click on the last teacher. This will highlight all of the teachers in the list. Click on the Add button to add the selected teachers to the Existing members list on the left. 
Click on the Back to groups button to return to the ReadingProg Groups screen. The Teachers group will now appear as Teachers(20) and, when selected, the list of teachers will appear in the Members of: list found on the right side of the screen, as shown in the following screenshot: Next, navigate to the front page of your site and from the Site Administration block, click on the Miscellaneous heading link and then on the Experimental link. Scroll down to the Enable groupings setting and click the tickbox to enable this setting. This setting enables you to group multiple groups together and also to make activities exclusively available to specific groupings. We'll need this capability when we set up the forums later. For a more detailed explanation of the groupings feature, visit the associated Moodle Docs page at: http://docs.moodle.org/en/Groupings. What just happened? We just created a group, within our Reading Program meta course, for all of the teachers enrolled in the course. Because the enrollments for a meta course are pulled from the child courses associated with a meta course, the teachers are all teachers who are teaching reading courses in our program. Later in this article, we'll see how we can use this group when we set up forums that we only want our teachers to have access to.

Making Ajax Requests with YUI

Packt
10 Jan 2011
5 min read
In this Ajax tutorial, you will learn the YUI way of making Asynchronous JavaScript and XML (AJAX) requests. Although all modern browsers support sending asynchronous requests to the server, not all browsers work the same way. Additionally, you are not required to return XML; your AJAX requests may return JSON, text, or some other format if you prefer. The Connection component provides a simple, cross-browser safe way to send and retrieve information from the server.

How to make your first AJAX request

This recipe will show you how to make a simple AJAX request using YUI.

Getting ready

To use the Connection component, you must include the YUI object, the Event component, and the core of the Connection component:

<script src="pathToBuild/yahoo/yahoo-min.js" type="text/javascript"></script>
<script src="pathToBuild/event/event-min.js" type="text/javascript"></script>
<script src="pathToBuild/connection/connection_core-min.js" type="text/javascript"></script>

If you plan on using the form serialization example, or other advanced features, you will need to include the whole component instead of only the core features:

<script src="pathToBuild/connection/connection-min.js" type="text/javascript"></script>

How to do it...

Make an asynchronous GET request:

var url = "/myUrl.php?param1=asdf&param2=1234";
var myCallback = {
    success: function(o) {/* success handler code */},
    failure: function(o) {/* failure handler code */},
    /* ... */
};
var transaction = YAHOO.util.Connect.asyncRequest('GET', url, myCallback);

Make an asynchronous POST request:

var url = "/myUrl.php";
var params = "param1=asdf&param2=1234";
var myCallback = {
    success: function(o) {/* success handler code */},
    failure: function(o) {/* failure handler code */},
    /* ... */
};
var transaction = YAHOO.util.Connect.asyncRequest('POST', url, myCallback, params);

Make an asynchronous POST request using a form element to generate the post data:

var url = "/myUrl.php";
var myCallback = {
    success: function(o) {/* success handler code */},
    failure: function(o) {/* failure handler code */},
    /* ... */
};
YAHOO.util.Connect.setForm('myFormElementId');
var transaction = YAHOO.util.Connect.asyncRequest('POST', url, myCallback);

How it works...

All modern browsers have supported AJAX natively since the early 2000s. However, IE implemented a proprietary version using the ActiveXObject object, while other browsers implemented the standards-compliant XMLHttpRequest (XHR) object. Each object has its own implementation and quirks, which YUI silently handles for you. Both objects make an HTTP request to the provided URL, passing any parameters you specified. The server should handle AJAX requests like any normal URL request.

When making a GET request, the parameters should be added to the URL directly (as in the example above). When making a POST request, the parameters should be a serialized form string (&key=value pairs) and provided as the fourth argument. Connection Manager also allows you to provide the parameters for a GET request as the fourth argument, if you prefer.

Using the setForm function attaches a form element for serialization with the next call to the asyncRequest function. The element must be a form element or it will throw an exception.

YUI polls the browser XHR object until a response is detected, then it examines the response code and the response data to see if it is valid. If it is valid, the success event fires, and if it is not, the failure event fires.
YUI wraps the XHR response with its own connection object, thereby masking browser variations, and passes the wrapper object as the first argument of all the AJAX callback functions.

There's more...

Besides POST and GET, you may also use PUT, HEAD, and DELETE requests, but these may not be supported by all browsers or servers. It is possible to send synchronous requests through the native XHR objects; however, Connection Manager does not support this.

The asyncRequest function returns an object known as the transaction object. This is the same object that YUI uses internally to manage the XHR request. It has the following properties:

See also

Exploring the callback object properties recipe, to learn what properties you can set on the callback object.
Exploring the response object recipe, to learn what properties are available on the YUI object passed into your callback functions.

Exploring the callback object properties

The third argument you can provide to the asyncRequest function defines your callback functions and other related response/request properties. This recipe explains what those properties are and how to use them.

How to do it...

The properties available on the callback object are:

var callback = {
    argument: {/* ... */},
    abort: function(o) {/* ... */},
    cache: false,
    failure: function(o) {/* ... */},
    scope: {/* ... */},
    success: function(o) {/* ... */},
    timeout: 10000, // 10 seconds
    upload: function(o) {/* ... */}
};

How it works...

The various callback functions attached to the connection object use the CustomEvent.FLAT callback function signature. This way, the response object is the first argument of the callback functions. Each of the callback functions is subscribed to the appropriate custom event by the asyncRequest function. When the Connection Manager component detects the corresponding event conditions, it fires the related custom event.

The upload callback function is special because an iframe is used to make this request. Consequently, YUI cannot reasonably discern success or failure, nor can it determine the HTTP headers. This callback will be executed both when an upload is successful and when it fails, instead of the success and failure callback functions.

The argument property is stored on the response object and passed through to the callback functions. You can set the argument to anything that evaluates as true. When the cache property is true, YUI maps the responses to the URLs, so if the same URL is requested a second time, Connection Manager can simply execute the proper callback function immediately. The timeout property uses the native browser setTimeout function to call the abort function when the timeout expires. The timeout is cleared when an AJAX response is detected for a transaction.

See also

Exploring the response object properties recipe, to learn what properties are available on the YUI object passed into your callback functions.
Using event callback functions recipe, to learn common practices for handling failure and success callback functions.

Moodle CIMS: Installing and Using the Bulk Course Upload Tool

Packt
07 Jan 2011
7 min read
Moodle as a Curriculum and Information Management System Use Moodle to manage and organize your administrative duties; monitor attendance records, manage student enrolment, record exam results, and much more Transform your Moodle site into a system that will allow you to manage information such as monitoring attendance records, managing the number of students enrolled in a particular course, and inter-department communication Create courses for all subjects in no time with the Bulk Course Creation tool Create accounts for hundreds of users swiftly and enroll them in courses at the same time using a CSV file. Part of Packt's Beginner's Guide series: Readers are walked through each task as they read the book with the end result being a sample CIMS Moodle site Using the Bulk Course Upload tool Rather than creating course categories and then courses one at a time and assigning teachers to each course after the course is created, we can streamline the process through the use of the Bulk Course Upload tool. This tool allows you to organize all the information required to create your courses in a CSV (Comma Separated Values) file that is then uploaded into the creation tool and used to create all of your courses at once. Due to its design, the Bulk Course Upload tool only works with MySQL databases. Our MAMP package uses a MySQL database as do the LAMP packages. If your Moodle site is running on a database of a different variety you will not be able to use this tool. Time for action – installing the Bulk Course Upload tool Now that we have our teacher's accounts created, we are ready to use the Bulk Course Creation tool to create all of our courses. First we need to install the tool as an add-on admin report into our Moodle site. To install this tool, do the following: Go to the Modules and plugins area of www.moodle.org. Search for Bulk Course Upload tool. Click on Download latest version to download the tool to your computer. If this does not download the package to your hard drive and instead takes you to a forum in the Using Moodle course on Moodle.org, download the package that was posted in that forum on Sunday, 11 May 2008. Expand the package, contained within, and find the uploadcourse.php file. Place the uploadcourse.php file in your admin directory located inside your main Moodle directory. When logged in as admin, enter the following address in your browser address bar: http://localhost:8888/moodle19/admin/uploadcourse.php. (If you are not using a MAMP package, the first part of the address will of course be different.) You will then see the Upload Course tool explanation screen that looks like the following screenshot: The screen, shown in the previous screenshot, lists the thirty-nine different fields that can be included in a CSV file when creating courses in bulk via this tool. Most of the fields here control settings that are modified in individual courses by clicking on the Settings link found in the Administration block of each course. The following is an explanation of the fields with notes about which ones are especially useful when setting up Moodle as a CIMS: category: You will definitely want to specify categories in order to organize your courses. The best way to organize courses and categories here is such that the organization coincides with the organization of your curriculum as displayed in school documentation and student handbooks. 
If you already have categories in your Moodle site, make sure that you spell the categories exactly as they appear on your site, including capitalization. A mistake will result in the creation of a new category. This field should start with a forward slash followed by the category name, with each subcategory also being followed by a forward slash (for example, /Listening/Advanced).

cost: If students must pay to enroll in your courses, via the PayPal plugin, you may enter the cost here. You must have the PayPal plugin activated on your site, which can be done by accessing it via the Site Administration block by clicking on Courses and then Enrolments. Additionally, as this book goes to print, the ability to enter a field in the file used by the Bulk Course tool that allows you to set the enrolment plugin is not yet available. Therefore, if you enter a cost value for a course, it will not be shown until the enrolment plugin for the course is changed manually by navigating to the course and editing the course through the Settings link found in the course Administration block. Check Moodle.org frequently for updates to the Bulk Course Upload tool, as the feature should be added soon.

enrolperiod: This controls the amount of time a student is enrolled in a course. The value must be entered in seconds, so, for example, if you had a course that ran for one month and students were to be unenrolled after that period, you would set this value to 2,592,000 (60 seconds x 60 minutes per hour x 24 hours per day x 30 days = 2,592,000).

enrollable: This simply controls whether the course is enrollable or not. Entering a 0 will render the course unenrollable and a 1 will set the course to allow enrollments.

enrolstartdate and enrolenddate: If you wish to set an enrollment period, you should enter the start and end dates in these two fields. The dates can be entered in the month/day/year format (for example, 8/1/10).

expirynotify: Enter a 1 here to have e-mails sent to the teacher when a student is going to be unenrolled from a course. Enter a 0 to prevent e-mails from being sent when a student is going to be unenrolled. This setting is only functional when the enrolperiod value is set.

expirythreshold: Enter the number of days in advance you want e-mails notifying of student unenrollment to be sent. The explanation file included calls for a value between 10 and 30 days, but this value can actually be set to between 1 and 30 days. This setting is only functional when the enrolperiod value and expirynotify and/or notifystudents (see below) is/are set.

format: This field controls the format of the course. As of Moodle 1.9.8+ there are six format options included in the standard package. The options are lams, scorm, social, topics, weeks, and weeks CSS, and any of these values can be entered in this field.

fullname: This is the full name of the course you are creating (for example, History 101).

groupmode: Set this to 0 for no groups, 1 for separate groups, and 2 for visible groups.

groupmodeforce: Set this to 1 to force group mode at the course level and 0 to allow group mode to be set in each individual activity.

guest: Use a 0 to prevent guests from accessing this course, a 1 to allow guests into the course, and a 2 to allow only guests who have the key into the course.

idnumber: You can enter a course ID number using this field. This number is only used for administrative purposes and is not visible to students.
This is a very useful field for institutions that use identification numbers for courses, as it can provide a link between the courses within Moodle and other systems. If your institution uses any such numbering system, it is recommended that you enter the appropriate numbers here.

lang: This is the language setting for the course. Leaving this field blank will result in the Do not force language setting, which can be seen from the Settings menu accessed from within each individual course; this allows users to toggle between languages that have been installed on the site. To specify a language, and thus force the display of the course in that language, enter the language as it is displayed within the Moodle lang directory (for example, English = en_utf8).

maxbytes: This field allows you to set the maximum size of individual files that are uploaded to the course. Leaving this blank will result in the course being created with the site-wide maximum file upload size setting. Values must be entered in bytes (for example, 1 MB = 1,048,576 bytes). Refer to an online conversion site such as www.onlineconversion.com to help you determine the value you want to enter here.

metacourse: If the course you are creating is a meta course, enter a 1; otherwise enter a 0 or leave the field blank.
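To make the field descriptions more concrete, here is a small sample of what a few rows of such a CSV file might look like. It uses only a handful of the fields described above; the course names, categories, ID numbers, and dates are invented for illustration, and a real file for this tool would normally include further fields (such as a course short name) from the full list of thirty-nine:

fullname,category,format,enrollable,enrolstartdate,enrolenddate,guest,idnumber,lang,maxbytes
Advanced Listening 1,/Listening/Advanced,topics,1,8/1/10,12/17/10,0,LIS-ADV-001,en_utf8,1048576
Basic Writing 1,/Writing/Basic,weeks,1,8/1/10,12/17/10,1,WRI-BAS-001,,2097152

Each row creates one course: the category column places it under the appropriate skill and level category, the two dates define the enrollment window, and maxbytes caps individual file uploads at 1 MB and 2 MB respectively.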

Django JavaScript Integration: jQuery In-place Editing Using Ajax

Packt
07 Jan 2011
8 min read
Django JavaScript Integration: AJAX and jQuery

Develop AJAX applications using Django and jQuery
Learn how Django + jQuery = AJAX
Integrate your AJAX application with Django on the server side and jQuery on the client side
Learn how to handle AJAX requests with jQuery
Compare the pros and cons of client-side search with JavaScript and initializing a search on the server side via AJAX
Handle login and authentication via Django-based AJAX

This will allow us to create a results page as shown. When someone clicks OK, the data is saved on the server and also shown on the page. Let's get started on how this works.

Including a plugin

We include a jQuery plugin on a page by including jQuery, then including the plugin (or plugins, if we have more than one). In our base.html, we update:

{% block footer_javascript_site %}
<script language="JavaScript" type="text/javascript" src="/static/js/jquery.js"></script>
<script language="JavaScript" type="text/javascript" src="/static/js/jquery-ui.js"></script>
<script language="JavaScript" type="text/javascript" src="/static/js/jquery.jeditable.js"></script>
{% endblock footer_javascript_site %}

This is followed by the footer_javascript_section and footer_javascript_page blocks. This means that if we don't want the plugin, which is the last inclusion, to be downloaded for every page, we could put it in overridden section and page blocks instead; either way, it still renders after jQuery.

How to make pages more responsive

Note that this setup, with three separate JavaScript downloads, is appropriate for development but not for deployment. In terms of YSlow client-side performance optimization, the recommended best practice is one HTML/XHTML hit, one CSS hit at the top, and one JavaScript hit at the bottom. One of the basic principles of client-side optimization, discussed by Steve Souders (see http://developer.yahoo.com/yslow/), is that HTTP requests slow the page down; the recommended best practice is therefore one (preferably minified) CSS inclusion at the top of the page and one (preferably minified) JavaScript inclusion at the bottom of each page. Each HTTP request beyond this makes things slower, so combining CSS and/or JavaScript requests into a single concatenated file is low-hanging fruit for making your web pages feel quicker and more responsive to users. For deployment, we should minify and combine the JavaScript. As we are developing, we also have JavaScript included in templates and rendered into the delivered XHTML; this may be appropriate for development purposes, but for deployment, as much shared functionality as possible should be factored out into an included JavaScript file. For content that can be delivered statically, such as CSS, JavaScript, and even non-dynamic images, setting far-future Expires/Cache-Control headers is desirable. (One practice is to never change the content of a published URL for content that has a far-future expiration set; if it needs updating, leave the existing content where it is, publish at a new location, possibly including a version number, and reference the new location.)
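For deployment, the same footer_javascript_site block in base.html could serve one combined download instead of three. This is only a minimal sketch under the assumption that a concatenated, minified file named site.min.js (built from jquery.js, jquery-ui.js, and jquery.jeditable.js) has been placed under /static/js/; the file name and the build step are assumptions, not part of the book's code:

{% block footer_javascript_site %}
<!-- One combined, minified file: jQuery + jQuery UI + Jeditable -->
<script type="text/javascript" src="/static/js/site.min.js"></script>
{% endblock footer_javascript_site %}

Because every page inherits this block from base.html, swapping three script tags for one here reduces the HTTP request count site-wide without touching individual templates.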
A template handling the client-side requirements

Here's the template. Its view will render it with an entity and other information. At present it extends the base directly; in many cases it is desirable to have the rendered templates extend section templates, which in turn extend the base. In our simple application, we have two templates which are directly rendered to web pages: one is the page that handles both search and search results, and the other is the page that handles a profile, from the following template:

{% extends "base.html" %}

Following earlier discussion, we include honorifics before the name, and post-nominals after. At this point we do not do anything to make it editable.

{% block head_title %}
{{ entity.honorifics }} {{ entity.name }} {{ entity.post_nominals }}
{% endblock head_title %}
{% block body_main %}

There is one important point about Django and the title block. The Django developers do not find it acceptable to write a templating engine that produces errors in production if someone attempts to access an undefined value (by typos, for instance). As a result of this design decision, if you attempt to access an undefined value, the templating engine will silently insert an empty string and move on. This means that it is safe to include a value that may or may not exist, although there are ways to test whether a value exists and is nonempty, and to display another default value in that case. We will see how to do this soon.

Let's move on to the main block, defined by the last line of code. Once we are in the main block, we have an h1 which is almost identical to the title block, but this time it is marked up to support editing in place. Let us look at the honorifics span; the name and post_nominals spans work the same way:

<h1>
<span id="Entity_honorifics_{{ entity.id }}" class="edit">
{% if entity.honorifics %}
{{ entity.honorifics }}
{% else %}
Click to edit.
{% endif %}
</span>

The class edit is used to give all $(".edit") items some basic special treatment with Jeditable (a sketch of that wiring appears after the markup below); there is nothing magical about the class name, which could have been replaced by user-may-change-this or something else. edit merely happens to be a good name choice, like almost any good variable/function/object name. We create a naming convention in the span's HTML ID which will enable the server side to know which, of a long and possibly open-ended number of things we could intend to change, is the one we want. In a nutshell, the convention is modelname_fieldname_instanceID. The first token is the model name, and is everything up to the first underscore. (Even if we are only interested in one model now, it is more future-proof to design so that we can accommodate changes that introduce more models.) The last token is the instance ID, an integer. The middle token, which may contain underscores (for example, post_nominals in the following code), is the field name. There is no specific requirement to follow a naming convention, but it allows us to specify an HTML ID that the server-side view can parse for information about which field on which instance of which model is being edited. We also provide a default value, in this case Click to edit, intended not only to serve as a placeholder, but to give users a sense of how this information can be updated. We might also observe that here and in the following code, we do not presently have checks against race conditions in place, so nothing here or in the following code will stop users from overwriting each other's changes. This may be taken as a challenge to refine and extend the solution to either prevent race conditions or mitigate their damage.
<span id="Entity_name_{{ entity.id }}" class="edit"> {% if entity.name %} {{ entity.name }} {% else %} Click to edit. {% endif %} </span> <span id="Entity_post_nominals_{{ entity.id }}" class="edit"> {% if entity.post_nominals %} {{ entity.post_nominals }} {% else %} Click to edit. {% endif %} </span> </h1> This approach is an excellent frst approach but in practice is an h1 with three slots that say Click to edit on a profle, creating needless confusion. We move to a simplifed: <span id="Entity_name_{{ entity.id }}" class="edit"> {% if entity.name %} {{ entity.name }}jQuery In-place Editing Using Ajax {% else %} Click to edit. {% endif %} </span> <span id="Entity_post_nominals_{{ entity.id }}" class="edit"> {% if entity.post_nominals %} {{ entity.post_nominals }} {% else %} Click to edit. {% endif %} </span> </h1> Taken together, the three statements form the heading in this screenshot: If we click on the name (for instance) it becomes: The image is presently a placeholder; this should be expanded to allow an image to be uploaded if the user clicks on the picture (implementing consistent-feeling behavior whether or not we do so via the same plugin). We also need the view and urlpattern on the backend: <h1 class="edit" id="Entity_name_{{ entity.id }}"> {{ entity.name }} </h1>

Building the Moodle CIMS Foundation: Creating Categories and Courses

Packt
07 Jan 2011
8 min read
Moodle as a Curriculum and Information Management System

Use Moodle to manage and organize your administrative duties; monitor attendance records, manage student enrolment, record exam results, and much more
Transform your Moodle site into a system that will allow you to manage information such as monitoring attendance records, managing the number of students enrolled for a particular course, and inter-department communication
Create courses for all subjects in no time with the Bulk Course Creation tool
Create accounts for hundreds of users swiftly and enroll them in courses at the same time using a CSV file
Part of Packt's Beginner's Guide series: readers are walked through each task as they read through the book, with the end result being a sample CIMS Moodle site

Course categories

Categorization is an innate human behavior that allows us to perceive and understand the environment that surrounds us. Moodle's designers must have recognized our tendency to categorize, because Moodle contains a flexible categorization system that allows for the creation of categories in which you may house additional categories and courses. Any educational program that offers courses of various types will invariably use a categorization system like this for grouping courses into specific categories. A language program, for example, might group courses into skill-specific categories such as listening, speaking, reading, and writing. A larger entity, such as a college, would likely group courses into content-specific categories such as literature, sciences, speech communications, and the like, with additional subcategories used inside each of those main categories. No matter what the categorization system, Moodle is well-equipped to accommodate it via its intuitive, user-friendly course category creation interface.

Manual creation of course categories

We will quickly walk through the manual creation of a simple categorization system in the next few pages. It should be noted, however, that course categories can also be created automatically via the Bulk Course Upload tool introduced in the next article. While the automated creation process is certainly more efficient, it is a good idea to understand how to create, edit, and adjust categories manually, as the need to make adjustments may arise after categories have been created automatically, and at that point the only practical method may be the manual process. Using the language program sample as an example, we will set up a categorization system that uses the traditional language skills (listening, speaking, reading, and writing) as the highest level, with subcategories for levels. In our example, our program has four levels: Advanced, Intermediate, Beginner, and Basic, so we will set up each skill category such that it contains subcategories that coincide with the four levels.

Time for action – manually creating course categories

Let's get started by first taking a look at the courses and categories that exist in the default installation of our MAMP package. We'll proceed by manually creating the categories and subcategories we need for our language program example.

Log in to your Moodle site as admin, or as a user with administrative permissions, and click on the All courses link found at the bottom of the Course categories block on your front page.
An alternative method for accessing the Course category window is to simply append the word 'course' to your website address in the browser address bar from the front page of your Moodle site. This will direct your browser to the default file, index.php, located in the course directory (for example, for the XAMPP package, it will look like this: http://localhost/moodle19/course).

The following screenshot is of a default MAMP installation. For Windows XAMPP installations, no courses or categories will exist. You will see the two default courses that are created in the MAMP package and no category. As shown in the following screenshot, the full name of the course appears on the left side of the screen with a small icon of a person below it. The icon, shown with an arrow pointing to it in the following screenshot, signifies that the course is set to allow guest users to access it. On the right side of the screen is the course summary.

Click on the Turn editing on button from the All courses screen, shown in the previous screenshot, to reveal the course category as shown in the next screenshot. This editing screen displays the categories and the number of courses contained in each category. The category was not listed in the course view window in the previous screenshot because there is currently only one category.

With editing on, now click on the Add new category button and, on the subsequent screen, type in the desired category title. For this example, we are going to enter the four skills mentioned previously. Also, as we want these to be our four main categories, we will set the Parent category to Top. Enter a category description and click on the Create category button to finish the process. The following screenshot shows our setup prior to creating the category:

After clicking on the Create category button, the next screen will be an editing screen from within the Listening category you just created. As a result, you will not see the Add new category button; instead, you will see an Add a sub-category button. Click on this button to access the screen that allows you to create a new category. After doing so, you will simply need to change the Parent category to Top. Repeat this process until you have created all of your top-level categories.

After you have created all the categories, turn the editing feature off and click on the Course categories breadcrumb link, found at the top-left of the screen, to see the result. It will look like the following screenshot:

If you wish to change the order in which the categories appear, you can turn editing back on and use the up and down arrows to move categories. In the following screenshot, which is the same screen as the previous one with editing turned on, we have moved the Miscellaneous category to the bottom and rearranged the main categories into a different order.

Next, we will create the four level categories using the same process explained for the main categories. The only difference is that we will create each of the four levels inside the main categories by designating the main category as the Parent category. From the editing screen shown in the previous screenshot, click on one of the categories and then on the subsequent Add a sub-category button, as shown in the following screenshot. Creating the category in this fashion will result in the parent category being automatically set to the main category to which you are adding the sub-category.
As earlier, when we created multiple categories in succession, after adding the first sub-category you will need to adjust the Parent category if you click on Add a sub-category again; if you do not, you will effectively be burying sub-categories within sub-categories. The alternative is to click on the Course categories pull-down menu prior to clicking on Add a sub-category. Create all four levels, Advanced, Intermediate, Beginner, and Basic, using this process for each of the four skills (Listening, Reading, Speaking, and Writing). When you have finished adding all of the subcategories to the main categories and have returned to the main Course Categories window, your screen should look like the following screenshot:

What just happened?

You have just created a simple categorization system with four main skills (Listening, Speaking, Reading, and Writing). Next, you created four level subcategories (Advanced, Intermediate, Beginner, and Basic) inside each of the main categories. Whether you followed the example used here or created an even more intricate categorization scheme, you may have felt that the process was a bit time-consuming and required quite a few mouse clicks. As mentioned at the beginning of this explanation, creating categories via the Bulk Course Upload tool is much more efficient and is recommended when possible. There will be times, however, when you need to create new categories after courses have already been made, or to edit or rearrange categories. On those occasions you may find it necessary to use the manual procedure, so it is a good idea to be familiar with the process.

Drupal Intranets with Open Atrium: Creating Dashboard

Packt
05 Jan 2011
7 min read
Drupal Intranets with Open Atrium

Discover an intranet solution for your organization with Open Atrium
Unlock the features of Open Atrium to set up an intranet to improve communication and workflow
Explore the many features of Open Atrium and how you can utilize them in your intranet
Learn how to support, maintain, and administer your intranet
A how-to guide written for non-developers to learn how to set up and use Open Atrium

Main dashboard

The main dashboard provides an interface for managing and monitoring our Open Atrium installation. This dashboard provides a central place to monitor what's going on across our departments, and it will also be used as the central gateway for most of our administrative tasks. From this screen we can add groups, invite users, and customize group dashboards. Each individual who logs in also has the main dashboard and can quickly glance at the overall activity for their company.

By default, the dashboard initially uses a two-column layout: the left side of the screen contains the Main Content section and the right side contains a Sidebar. In a default installation of Open Atrium, there will be a welcome video in the Main Content area on the left. The first thing that you will notice when you log in is that there is a quick video that you can play on your main dashboard screen. This video provides a quick overview of Open Atrium for our users and a review of the options you have for working with the dashboard. In the following screenshot, you will see the main dashboard and how the two separate content areas are divided, with a specific section marked that we will discuss later in the article:

Each dashboard can be customized to either a two-column or split layout, as shown in the preceding screenshot, or a three-column layout. Under the Modifying Layout section of this article, we will cover how to change the overall layout. As you can see in the preceding image, the dashboard is divided into three distinct sections. There is the header area, which includes the Navigation tabs for creating content, modifying settings, and searching the site. Under the header area, we have the main content and sidebar areas. These areas are made up of blocks of content from the site, and these blocks can bring forward and include different items depending on how we customize our site. For example, in the left column we could choose to display Recent Activity and Blog Posts, while the right column could show Upcoming Events and a Calendar. Any of the features that we find throughout Open Atrium can be brought forward to a dashboard page. The beauty of this setup is that each group can customize its own dashboard. In the next section of this article, we will cover group dashboards in more detail; however, the same basic concepts apply to all the dashboards.

After our users are comfortable with using Open Atrium, we may decide that we no longer need to show the tutorial video on the main dashboard. This video can be easily removed by clicking on the Customizing the dashboard link just above the Recent Activity block, or by clicking on the Customize dashboard link at the top right of the header section. Click on the customizing the dashboard link and a dashboard widget will appear on the screen; this will be the main interface for configuring layout and content on our dashboard. Now, hover over the video and on the top right you will see two icons. The first icon, which looks like a plus sign (+), indicates that the content can be dragged.
We can click on this icon when hovering over a section of content and move that content to another column or below another section of content on our dashboard. The X indicates that we can remove that item from our dashboard. Hovering over any piece of content while customizing the dashboard should reveal these two icons. The two icons are highlighted in the following screenshot with a square box drawn around them:

To remove the welcome video, click on the red X and then on Save changes, and the video tutorial will be removed from the dashboard.

Group dashboard

The group dashboard works the same way as the main dashboard. The only difference is that the group dashboard exposes content for the individual departments or groups that are set up on our site. For example, a site could have separate groups for the Human Resources, Accounting, and Management departments. Each of these groups can create a group dashboard that can be customized by any of the administrators for a particular group. The following screenshot shows how the Human Resources department has customized their group dashboard:

In the preceding screenshot, we can see how the HR department customized their dashboard. In the left column they have added a Projects and a Blog section. The Projects section links to specific projects within the site, and the Blog section links to the detailed blog entries. There is also a customized block in the right column where the HR department has added Upcoming events, a Mini calendar, and a Recent activity block. The Projects section is a block that is provided by the system and exposes content from the Case tracker or Todo sections of the HR site. The Upcoming events section is a customized block that highlights future events entered through the calendar feature.

To demonstrate how each department can have a different dashboard, the following screenshot shows the dashboard for the Accounting department:

The Accounting dashboard has been configured to show a custom block as the first item in the left column, and below that a listing of Upcoming events. In the right column, the Accounting administrator has added a Latest cases block, exposing the most recent issues entered into the tracking system. It is also worth noting that the Accounting department has a completely different color scheme from the Human Resources department. The color scheme can be changed by clicking on Settings | Group Settings | Features, scrolling down to the bottom of the screen, and clicking on BACKGROUND to either enter a hexadecimal value for the main color or pick a color from the color wheel, as displayed in the following screenshot:

Spaces

Spaces is a Drupal API module that allows site-wide configurable options to be overridden by individual spaces. The spaces API is included with our Open Atrium installation and provides the foundation for creating group- and user-configurable dashboards. Users can then customize their space and set options that apply only to their space. This shows the power and flexibility of Open Atrium: users can apply customizations without affecting any of the other areas of Open Atrium. Users can use the functionality provided by spaces to create an individualized home page.

Group spaces

Group spaces provide an area for each group or department to arrange content in a contextual manner that makes sense for that group.
In the preceding examples, the content that is important to the accounting department is not necessarily important to the human resources department. Administrators of each department can take advantage of Open Atrium's flexibility to arrange content in a way that works for them. The URLs in the example that we have been looking at are as follows:

Human Resources: http://acme.alphageekdev.com/hr
Accounting: http://acme.alphageekdev.com/acct

Each URL is composed of the site URL, that is, http://acme.alphageekdev.com/, followed by the short name that we provided for the group space, hr and acct respectively.

User spaces

User spaces work in the same way as group dashboards and spaces. Each user of the system can customize their dashboard in any way they see appropriate. The following screenshot shows an example of the user dashboard for the admin account:

In the preceding screenshot, we have drawn a box around two areas. These two areas represent two different group spaces showing on the user's dashboard page. This shows how content can be brought forward to various dashboards to show only what is important to a particular user.