Friday, December 24, 2010

Enumerate reasons why we use technology today. What are some points that have influenced us? What are the factors involved in technology change?

Technology is the set of tools, both hardware (physical) and software (algorithms, philosophical systems, or procedures), that help us act and think better. Technology includes everything from a basic pencil and paper to the latest electronic gadget. Electronic and computer technologies help us use and share information and knowledge quickly and efficiently. What was previously slow and tedious is now easier and more practical. Any tool has the potential to remove the tedium and repetition, freeing us to do what is most human: thinking, dreaming, and planning.

We use technology for the following reasons: (1) it helps us act and think better; (2) it disseminates information quickly and efficiently; (3) it eases our work; (4) it generates profit; and (5) it speeds up transactions.

The points that have influenced us to use technology include: (1) curiosity; (2) current trends; and (3) the desire to make life easier and better.

There are also factors involved in technological change, including: (1) new ideas; (2) creativity; (3) discontent; and (4) people's demands and needs.

Thursday, September 2, 2010

SAD 2 - Assignment 4

Contrast and discuss the enrollment input form (PRF) with the enrollment university interface.

Looking at the Pre-Registration Form (PRF), it seems to have many differences from the interface of the university's enrollment system. There are entries or fields in the system that a student must fill out but that are not present on the PRF. Some of those entries are the following:
  • Civil Status
  • Birthdate
  • Contact Number
  • E-mail Address
  • Religion
  • Desired Career
  • Year
  • Parents
If the entries above are not present on the PRF, then what values would be entered into those fields? For an old (continuing) student, those fields may already have values, but what about a new student? What should those fields contain?

Aside from those missing fields, I also observed a big difference in the positions of the various fields on the PRF and in the enrollment system interface. If I were the encoder, and assuming it were my first time to operate the system, it would be difficult to transfer the values from the PRF into the system, since I would have to look at the PRF and then hunt for the corresponding field in the system. That would be time-consuming, and there is a real possibility of entering values into the wrong fields.

I also found that the PRF captures the scholarship a student has, but the system has no field for scholarship. Likewise, the employment information is not present on the enrollment system interface.

My suggestions would be: (a) the look and feel of the PRF should closely match the look and feel of the university's enrollment system interface, so that it is easy for the encoder to transfer the necessary information from the PRF into the system; and (b) entries that are present in the system but not on the PRF should be removed.

Friday, July 30, 2010

SAD 2 - Assignment 3

Interview your university network specialist. Ask how the various parts of the system communicate with each other throughout the university. (Q) Given the chance to redesign the existing setup, enumerate and discuss your key points for an effective and efficient network environment ideal for the university.


1. What are the components involved in the system(s) in the university? (hardware, software, technology, etc.)

I am not in the right position to discuss the details of the software components used, as other personnel are assigned to that job. However, regarding the hardware components and technology used: as the network administrator, I am entrusted to keep our different servers running 24/7. Currently, our Web server is hosted here in the university on an HP ProLiant ML350 Server. It's an old but stable server set up here in our Networks Office, and it has been active since Engr. Val A. Quimno, not yet a dean at the time, was appointed as the Network Administrator. The said server has the following specifications:

* Intel Xeon 3.0 GHz, 3.2 GHz, or 3.4 GHz processors (dual processor capability) with 1MB level 2 cache standard. Processors include support for Hyper-Threading and Extended Memory 64 Technology (EM64T)

* Intel® E7520 chipset

* 800-MHz Front Side Bus

* Integrated Dual Channel Ultra320 SCSI Adapter

* Smart Array 641 Controller (standard in Array Models only)

* NC7761 PCI Gigabit NIC (embedded)

* Up to 1 GB of PC2700 DDR SDRAM with Advanced ECC capabilities (Expandable to 8 GB)

* Six expansion slots: one 64-bit/133-MHz PCI-X, two 64-bit/100-MHz PCI-X, one 64-bit/66-MHz PCI-X, one x4 PCI-Express, and one x8 PCI-Express

* New HP Power Regulator for ProLiant delivering server level, policy based power management with industry leading energy efficiency and savings on system power and cooling costs

* Three USB ports: 1 front, 1 internal, 1 rear

* Support for Ultra320 SCSI hard drives (six hot plug or four non-hot plug drives supported standard, model dependent)

* Internal storage capacity of up to 1.8TB; 2.4TB with optional 2-bay hot plug SCSI drive

* 725W Hot-Plug Power Supply (standard, most models); optional 725W Hot-Pluggable Redundant Power Supply (1+1) available. Non hot plug SCSI models include a 460W non-hot plug power supply.

* Tool-free chassis entry and component access

* Support for ROM based setup utility (RBSU) and redundant ROM

* Systems Insight Manager, SmartStart, and Automatic Server Recovery 2 (ASR-2) included

* Protected by HP Services and a worldwide network of resellers and service providers. Three-year Next Business Day, on-site limited global warranty. Certain restrictions and exclusions apply. Pre-Failure Notification on processors, memory, and SCSI hard drives.

Aside from that, our mail server, which runs on a Compaq ProLiant ML330 Server (our oldest server), is also hosted here in our Networks Office, together with other servers such as the proxy and enrollment servers. Both the proxy and enrollment servers run on microcomputers/personal computers, but with higher specifications so they can act as servers.

2. How do these communicate with one another? (topology, network connectivity, protocols, etc.) – may include data flow/ UML diagrams to better explain.

All servers are connected to a shared medium and grouped as one subnetwork. In general, our network follows an extended star topology whose root is a dual-WAN router that serves as the load balancer between our two Internet service providers. All other workstations are grouped into different subnetworks, each a star, branching out from the server subnetwork to form the extended star. At present, we use Class C addresses for private IP address assignments. Some workstations are configured statically (for example, in the laboratories) while others are dynamic (for example, in the offices). All workstations connect through our proxy servers, which do basic filtering/firewalling to control user access to the Internet, in addition to the router's own filtering/firewall management. So whenever any workstation connects to the Internet, it passes through both software-based and hardware-based firewalls.
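The Class C private addressing described above can be sketched with Python's standard ipaddress module. The specific subnets and host addresses below are illustrative assumptions, not the university's actual assignments:

```python
import ipaddress

# Hypothetical Class C private subnetworks, one per workstation group
# (the university's actual subnets are not given in the interview).
subnets = {
    "servers": ipaddress.ip_network("192.168.1.0/24"),
    "laboratories": ipaddress.ip_network("192.168.2.0/24"),
    "offices": ipaddress.ip_network("192.168.3.0/24"),
}

def locate(host: str) -> str:
    """Return which subnetwork a host address belongs to, if any."""
    addr = ipaddress.ip_address(host)
    for name, net in subnets.items():
        if addr in net:
            return name
    return "unknown"

# 192.168.x.x addresses are private (RFC 1918), as used here.
assert ipaddress.ip_address("192.168.2.10").is_private
print(locate("192.168.2.10"))  # a laboratory workstation
```

Each /24 holds up to 254 hosts, which is why one Class C subnet per group (servers, laboratories, offices) is a natural fit for this layout.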

3. What are the processes involved in the communication (each system to other systems)?

As mentioned above in item 2, all workstations are connected via a proxy server. This means that whenever a workstation is turned on, it requests an IP address from the proxy server (for dynamically configured addresses) and connects to the network once the address is acquired. Once the connection is established, each system can communicate and share resources within the same subnetwork and with the servers, following the concepts discussed in your Computer Networks class.

4. How do you go along with the maintenance of the system?

Basically, our servers are expected to be in good condition since they are required to be up 24/7. Daily, during my vacant period, I monitor the servers, which includes checking logs, checking hardware performance such as CPU health, and so on. If problems are observed, remedies are applied then and there. Once a week, a regular overall checkup is done as preventive maintenance to avoid longer downtime where possible.

5. Does the system follow a specific standard? Explain Please.

When I was appointed as the Network Administrator, everything was already in place except for some minor changes. Basically, the different networking standards were already observed, such as the TIA/EIA 568A/B cabling standards and the various IEEE standards discussed in your Computer Networks subject.

6. How is the security of the system? Are there any vulnerabilities? Risks? Corresponding mitigation techniques? Access control?

As I have mentioned, we have implemented both software-based and hardware-based filtering/firewalls. Risks, vulnerabilities, and the corresponding mitigation techniques were considered to increase the security of our network. Aside from the filtering/firewall, constant monitoring of network activity also increases the security of the system.

Are there any interferences? During what times do these occur most? Explain their effects, especially with regard to the business of the university.

Major interferences are normally encountered as an effect of unforeseen events beyond our control, such as blackouts and the like. Such interference of course affects the university's day-to-day business, since it paralyzes all activities that rely on electricity, and it may also damage network devices, which can later cause longer downtime. Problems encountered by our providers, such as connectivity to the national/international gateway, also affect the university's business, for example its dealings with business partners within and outside the country.

Thursday, July 22, 2010

SAD 2 - Assignment 2

Relative to your answer in Assignment 1 .... what's your take on the design of the enrollment system?

All of my observations, comments, and suggestions about the new enrollment system implemented this year are already posted in my Assignment 1. Based on those, I will just summarize my take on the design of the newly implemented enrollment system.


The following are the necessary changes that should be implemented for the improvement of the new enrollment system of our university.

  • The first step, the student accounts part, can simply be removed, since the processes it involves (checking student accounts and checking balances) are already covered by the clearance signing that is usually done weeks before the end of a semester.
  • In the third step, payment of the other fees (e.g., local council, OCSC, headlight, insurance) should be done before presenting the enrollment requirements for the advising/pre-registration part, since the receipts for those fees must be shown at advising before the adviser can issue a pre-registration form to the student.
  • The sixth step, which is the last step on the diagram, should not be the final step, since we all know that we still have to go to the library after receiving the official Certificate of Registration (COR).


Thursday, July 8, 2010

SAD 2 - Assignment 1

Assuming you were tapped by the university president to evaluate the new enrollment system implemented this semester, enumerate your observations/comments and suggest possible areas and ways where improvements can be made. Your observations/suggestions should be properly validated with facts and literature.


The enrollment period plays a big role in a university, since it is the time when students register as part of the institution. For my part as a student, the university's enrollment procedure should be easy for us to adopt, understand, and follow. This is my fourth year in the university, and over the past seven enrollment periods I have experienced (two per year), it was not that easy to adopt and follow. The process is always changing, and I know for a fact that it is for the betterment of the process, since many students and other people involved in enrollment have complaints about it, and because of that, we all want change. However, changing the enrollment process can confuse the people involved. Fortunately, diagrams are posted in different areas of the university every enrollment. Those diagrams show the enrollment process and serve as a guide for the students, especially the freshmen, since they are newcomers to the university.

As a student, I can really say that posting those diagrams is a big help in keeping us from getting confused and in letting us know what steps to take during enrollment. This assignment that we are answering in this forum relates to the enrollment process found on those diagrams. Some issues were taken into consideration for the improvement of those diagrams, but they focus only on the enrollment process for freshman students, since the old students are already used to the university's various processes. As for shiftees, since they follow the same enrollment process as new students, I have simply generalized the enrollment process. I have some observations, comments, and suggestions to share with everyone, and they are the following:


In terms of the images:

1. I have observed that the picture of the student at the beginning of the enrollment process is different from the image of the student when he/she finishes the enrollment and is now officially enrolled. This should be taken into consideration, since in reality a student who enrolls does not grow larger by the time he/she has finished the enrollment procedure. Maybe it was used to emphasize that the student is now officially enrolled, but it is a little funny if applied to reality. If that were the case, by now I would already be fat (how I wish! LOL).

2. The long arrow pointing downward after step four of the enrollment process can really confuse students, since it suggests that the next step can be found below the previous one. But as you can see on the diagram, it is not, is it?
In terms of the process:

1. The first step, the student accounts part, can simply be removed, since the processes it involves (checking student accounts and checking balances) are already covered by clearance signing. As a student, I know that every time we have clearance signing, our student accounts and balances are already checked by the bookkeeper. Of course, if the bookkeeper finds something incomplete in the accounts, he/she will not sign the clearance. So the first step of the enrollment process can be dropped, because the clearance should already be completely signed before the enrollment period; the people who sign the clearance (e.g., the registrar, the bookkeeper, etc.) are already busy during enrollment and should not be disturbed, so that they can concentrate on the enrollment process alone.

2. In the third step, which involves presenting the enrollment requirements, advising, paying the other fees, encoding, assessment of fees, and temporary COR printing, the payment of the other fees (e.g., local council, OCSC, headlight, insurance) should be done before presenting the enrollment requirements for the advising/pre-registration part. I suggested this so that the process would be shorter, and I know that the receipts for the other fees must be included at the advising part for the adviser to be able to give a pre-registration form to the student. They should be checked there, so that by the registrar's part, students are already assured that they have the receipts for the other fees.

3. The sixth step, which is the last step on the diagram, should not be the final step, since we all know that we have to go to the library after receiving the official Certificate of Registration (COR). The library part should be included as the last step on the diagram. It should include presenting the COR to validate that the student is officially enrolled and presenting a 1x1 ID picture for the library card.

Saturday, March 13, 2010

SAD1 - Assignment 12

Infor ERP (Enterprise Resource Planning)



Flexible, low-cost ERP solutions that match the way your business works.


In today's world of globalization and price pressures, it's imperative that your enterprise resource planning systems offer business-specific solutions with industry experience built in. This is true whether you produce goods made from distinct parts and components such as automobiles, electronics, and machinery or goods made by blending ingredients such as foods, beverages, pharmaceuticals, and chemicals.


As an ERP software vendor, Infor offers a variety of ERP solutions that help companies in a wide spectrum of subsectors automate, plan, collaborate, and execute according to their unique business requirements. Built on an open, flexible, service-oriented architecture (SOA) with modern, web-based user interfaces, our scalable ERP solutions never lock you in to one mode of operating. Instead, they offer a breadth of functionality that enables you to automate key manufacturing and financial processes, meet fluctuating customer demand and compliance requirements, and collaborate internally as well as externally across your supply chain—all at a low total cost of ownership. Lean manufacturing capabilities are built in to our ERP solutions to minimize waste and increase quality and productivity; strong aftermarket service capabilities expedite service management.


With multiple deployment and buying options for Infor ERP, including Software as a Service (SaaS), manufacturers can choose the model that meets their specific requirements.


Infor ERP solutions help companies like yours:

* Reduce operational costs and improve efficiency
* Gain better visibility into transactions across the enterprise
* Make better business decisions
* Deliver the right product at the right time
* Keep customer promises
* Adopt manufacturing best practices, including lean


Infor's ERP solutions meet the diverse needs of today's manufacturers with robust functionality for two broad categories of manufacturing:


Production environments characterized by individual, separate unit manufacture of highly complex products—Infor's ERP software systems offer a high degree of flexibility for order-driven manufacturing where unit volumes are typically low and lead times variable. We extend the traditional ERP footprint with unrivaled support for a multitude of cross-business and manufacturing operations, integrated business process modeling, change and compliance management, and aftermarket service support, based on 25+ years of manufacturing excellence.


Process manufacturing production environments where ingredients are blended to formulate a whole—Infor ERP solutions can help minimize the total cost of quality, customer service, and compliance while meeting ever-increasing demand variability. We offer industry-specific functionality, advanced workflow technology, and flexible business process support with strong lot traceability, packaging, customer service, regulatory compliance, and financial capabilities. Industry expertise acquired through long-term partnerships with leading global process manufacturers is embedded in the software.

Service Management—Power and control for customer-centric service and maintenance.

Lean Manufacturing—Enabling lean processes across your enterprise and value chain.

Quality Management—Improved quality for increased productivity.

Financials—Single, integrated modern finance solution for manufacturers.

Manufacturing—Flexibility and control for manufacturers of highly complex products.

Process Manufacturing—Proven value optimization capabilities for the process industry.

Wholesale and Distribution—Control and responsiveness for extended supply chains.


REFERENCES:

http://www.infor.com/

SAD1 - Assignment 11


CHOOSING OR DEFINING DEPLOYMENT ENVIRONMENT

Choosing a deployment strategy requires design tradeoffs, for example because of protocol or port restrictions or because of specific deployment topologies in your target environment. Identify your deployment constraints early in the design phase to avoid surprises later, and involve members of your network and infrastructure teams in the process.

When choosing a deployment strategy an analyst should: understand the target physical environment for deployment; understand the architectural and design constraints based on the deployment environment; and understand the security and performance impacts of your deployment environment.


Distributed vs. Non-distributed Deployment

When creating your deployment strategy, first determine whether you will use a distributed or a non-distributed deployment model. If you are building a simple application and want to minimize the number of required servers, consider a non-distributed deployment, in which all of the functionality and layers reside on a single server except for data storage. If you are building a more complex application that you want to optimize for scalability and maintainability, consider a distributed deployment, in which the layers of the application reside on separate physical tiers.


Scale Up vs. Scale Out

Your approach to scaling is a critical design consideration. Whether you plan to scale out your solution through a Web farm, a load-balanced middle tier, or a partitioned database, you need to ensure that your design supports this. When you scale your application, you can choose from, and combine, two basic options: scale up (get a bigger box) and scale out (get more boxes). To scale up, you add hardware such as processors, RAM, and network interface cards (NICs) to your existing servers to support increased capacity. To scale out, you add more servers and use load-balancing and clustering solutions.
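As a rough sketch of the scale-out option, a load balancer can rotate requests across a pool of identical servers; the server names here are hypothetical:

```python
import itertools

# A hypothetical pool of identical servers behind a load balancer.
servers = ["web1", "web2", "web3"]
rotation = itertools.cycle(servers)  # endless round-robin rotation

def dispatch(request_id: int) -> str:
    """Send each incoming request to the next server in the pool."""
    return next(rotation)

handled = [dispatch(i) for i in range(6)]
# Scaling out means adding "web4" to the pool to raise capacity;
# scaling up would instead mean adding CPU/RAM to one existing box.
```

The design consequence is the one the text names: round-robin only works cleanly when any server can handle any request, which is why stateless layers scale out most easily.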

Consider Design Implications and Tradeoffs up Front

You need to consider aspects of scalability that may vary by application layer, tier, or type of data. Know your tradeoffs up front and know where you have flexibility and where you do not. Scaling up and then out with Web or application servers might not be the best approach.

Stateless Components

If you have stateless components (for example, a Web front end with no in-process state and no stateful business components), this aspect of your design supports both scaling up and scaling out. Typically, you optimize the price and performance within the boundaries of the other constraints you may have.

Data

For data, decisions largely depend on the type of data:

* Static, reference, and read-only data. For this type of data, you can easily have many replicas in the right places if this helps your performance and scalability. This has minimal impact on design and can be largely driven by optimization considerations. Consolidating several logically separate and independent databases on one database server may or may not be appropriate even if you can do it in terms of capacity. Spreading replicas closer to the consumers of that data may be an equally valid approach. However, be aware that whenever you replicate, you will have a loosely synchronized system.
* Dynamic (often transient) data that is easily partitioned. This is data that is relevant to a particular user or session (and if subsequent requests can come to different Web or application servers, they all need to access it), but the data for user A is not related in any way to the data for user B.
* Core data. This type of data is well maintained and protected. This is the main case where the “scale up, then out” approach usually applies. Generally, you do not want to hold this type of data in many places because of the complexity of keeping it synchronized. This is the classic case in which you would typically want to scale up as far as you can (ideally, remaining a single logical instance, with proper clustering), and only when this is not enough, consider partitioning and distribution scale-out. Advances in database technology (such as distributed partitioned views) have made partitioning much easier, although you should do so only if you need to. This is rarely because the database is too big, but more often it is driven by other considerations such as who owns the data, geographic distribution, proximity to the consumers, and availability.
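The "dynamic, easily partitioned" category above can be illustrated with a small hash-partitioning sketch; the partition count and helper names are assumptions for illustration:

```python
import zlib

# Hash-partitioning the per-user session data described above:
# user A's data never depends on user B's, so each user can be
# pinned to one partition. The partition count is an assumption.
NUM_PARTITIONS = 4
partitions = [{} for _ in range(NUM_PARTITIONS)]

def partition_for(user_id: str) -> int:
    """Map a user to a stable partition. zlib.crc32 is deterministic
    across runs, unlike Python's per-process-salted built-in hash()."""
    return zlib.crc32(user_id.encode()) % NUM_PARTITIONS

def save_session(user_id: str, data: dict) -> None:
    partitions[partition_for(user_id)][user_id] = data

def load_session(user_id: str) -> dict:
    return partitions[partition_for(user_id)].get(user_id, {})

save_session("userA", {"cart": ["book"]})
# Every request for userA maps to the same partition, so this class
# of data scales out with no cross-partition synchronization.
```

Core data is the opposite case: the same trick would force constant cross-partition synchronization, which is exactly why the text recommends "scale up, then out" for it.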

Consider Database Partitioning at Design Time

If your application uses a very large database and you anticipate an I/O bottleneck, ensure that you design for database partitioning up front. Moving to a partitioned database later usually results in a significant amount of costly rework and often a complete database redesign. Partitioning provides several benefits: the ability to restrict queries to a single partition, thereby limiting resource usage to only a fraction of the data; and the ability to engage multiple partitions, thereby gaining more parallelism and superior performance because more disks are working to retrieve your data.


Performance Patterns

Performance deployment patterns represent proven design solutions to common performance problems. When considering a high-performance deployment, you can scale up or scale out. Scaling up entails improvements to the hardware on which you are already running. Scaling out entails distributing your application across multiple physical servers to distribute the load. A layered application lends itself more easily to being scaled out.

Affinity and User Sessions

Web applications often rely on the maintenance of session state between requests from the same user. A Web farm can be configured to route all requests from the same user to the same server—a process known as affinity—in order to maintain state where this is stored in memory on the Web server. However, for maximum performance and reliability, you should use a separate session state store with a Web farm to remove the requirement for affinity.


Reliability Patterns

Reliability deployment patterns represent proven design solutions to common reliability problems. The most common approach to improving the reliability of your deployment is to use a failover cluster to ensure the availability of your application even if a server fails.

Failover Cluster

A failover cluster is a set of servers that are configured in such a way that if one server becomes unavailable, another server automatically takes over for the failed server and continues processing.
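A minimal sketch of the failover idea, with node names and the simulated failure chosen for illustration:

```python
# Minimal failover sketch: a client tries cluster nodes in priority
# order and transparently fails over when one is unavailable.
# The node names and the simulated failure are illustrative.
cluster = ["node1", "node2"]   # node2 is the standby
down = {"node1"}               # simulate the primary failing

def process(request: str) -> str:
    for node in cluster:       # the first healthy node takes over
        if node not in down:
            return f"{node} processed {request}"
    raise RuntimeError("entire cluster unavailable")

result = process("enrollment-query")
# With node1 down, node2 automatically continues processing.
```

A real cluster manager adds heartbeat detection and state handover, but the client-visible behavior is the same: processing continues despite the failed server.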


Security Patterns

Security patterns represent proven design solutions to common security problems. The impersonation/delegation approach is a good solution when you must flow the context of the original caller to downstream layers or components in your application. The trusted subsystem approach is a good solution when you want to handle authentication and authorization in upstream components and access a downstream resource with a single trusted identity.

Impersonation/Delegation

In the impersonation/delegation authorization model, resources and the types of operations (such as read, write, and delete) permitted for each one are secured using Windows Access Control Lists (ACLs) or the equivalent security features of the targeted resource (such as tables and procedures in SQL Server). Users access the resources using their original identity through impersonation.

Trusted Subsystem

In the trusted subsystem (or trusted server) model, users are partitioned into application-defined, logical roles. Members of a particular role share the same privileges within the application. Access to operations (typically expressed by method calls) is authorized based on the role membership of the caller. With this role-based (or operations-based) approach to security, access to operations (not back-end resources) is authorized based on the role membership of the caller. Roles, analyzed and defined at application design time, are used as logical containers that group together users who share the same security privileges or capabilities within the application. The middle-tier service uses a fixed identity to access downstream services and resources.
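A minimal sketch of the trusted subsystem model; the roles, operations, and fixed service identity are illustrative assumptions:

```python
# Trusted-subsystem sketch: callers are authorized by role at the
# middle tier, which then accesses downstream resources with one
# fixed identity. Roles, operations, and identity are illustrative.
ROLES = {
    "registrar": {"read_record", "update_record"},
    "student": {"read_record"},
}
FIXED_DB_IDENTITY = "svc_enrollment"  # single trusted identity

def call_operation(user_role: str, operation: str) -> str:
    # Access to the *operation* (not the back-end resource) is
    # authorized based on the caller's role membership.
    if operation not in ROLES.get(user_role, set()):
        raise PermissionError(f"{user_role} may not {operation}")
    # Downstream access always uses the fixed service identity.
    return f"{operation} executed as {FIXED_DB_IDENTITY}"

call_operation("registrar", "update_record")    # allowed
# call_operation("student", "update_record")    # raises PermissionError
```

Contrast with impersonation/delegation: there, the final line would carry the original caller's identity downstream instead of the fixed one.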


Network Infrastructure Security Considerations

Make sure that you understand the network structure provided by your target environment, and understand the baseline security requirements of the network in terms of filtering rules, port restrictions, supported protocols, and so on. Recommendations for maximizing network security include:

* Identify how firewalls and firewall policies are likely to affect your application's design and deployment. Firewalls should be used to separate Internet-facing applications from the internal network and to protect the database servers. They can limit the available communication ports and, therefore, authentication options from the Web server to remote application and database servers.
* Consider what protocols, ports, and services are allowed to access internal resources from the Web servers in the perimeter network or from rich client applications. Identify the protocols and ports that the application design requires, and analyze the potential threats that arise from opening new ports or using new protocols.
* Communicate and record any assumptions made about network and application layer security, and which security functions each component will handle. This prevents security controls from being missed when both the development and network teams assume that the other team is addressing the issue.
* Pay attention to the security defenses that your application relies upon the network to provide, and ensure that these defenses are in place.
* Consider the implications of a change in network configuration, and how this will affect security.


Manageability Considerations

The choices you make when deploying an application affect the capabilities for managing and monitoring the application. You should take into account the following recommendations:

* Deploy components of the application that are used by multiple consumers in a single central location to avoid duplication.
* Ensure that data is stored in a location where backup and restore facilities can access it.
* Components that rely on existing software or hardware (such as a proprietary network that can only be established from a particular computer) must be physically located on the same computer.
* Some libraries and adaptors cannot be deployed freely without incurring extra cost, or may be charged on a per-CPU basis; therefore, you should centralize these features.
* Groups within an organization may own a particular service, component, or application that they need to manage locally.
* Monitoring tools such as System Center Operations Manager require access to physical machines to obtain management information, and this may impact deployment options.
* The use of management and monitoring technologies such as Windows Management Instrumentation (WMI) may impact deployment options.


REFERENCES:

http://apparchguide.codeplex.com/wikipage?title=Chapter%205%20-%20Deployment%20Patterns&referringTitle=Home

SAD1 - Assignment 10


Data Flow Diagrams (DFDs) model events and processes (i.e., activities which transform data) within a system. DFDs examine how data flows into, out of, and within the system. The DFD principles are:

* A system can be decomposed into subsystems, and subsystems can be decomposed into lower-level subsystems, and so on.
* Each subsystem represents a process or activity in which data is processed. At the lowest level, processes can no longer be decomposed.
* Each 'process' (and from now on, by 'process' we mean subsystem and activity) in a DFD has the characteristics of a system: just as a system must have input and output (if it is not dead), so a process must have input and output.
* Data enters the system from the environment, data flows between processes within the system, and data is produced as output from the system.

The 'Context Diagram' is an overall, simplified view of the target system, which contains only one process box and the primary inputs and outputs. The Top or 1st level DFD describes the whole of the target system; it 'bounds' the system under consideration. Data Flow Diagrams show: the processes within the system; the data stores (files) supporting the system's operation; the information flows within the system; the system boundary; and interactions with external entities.


DFD Notations







Processes, in other methodologies, may be called 'Activities', 'Actions', 'Procedures', 'Subsystems' etc. They may be shown as a circle, an oval, or (typically) a rectangular box. Data are generally shown as arrows coming to, or going from the edge of a process box.








General Data Flow Rules


1. Entities are either 'sources' of or 'sinks' for data inputs and outputs - i.e. they are the originators or terminators for data flows.
2. Data flows from Entities must flow into Processes.
3. Data flows to Entities must come from Processes.
4. Processes and Data Stores must have both inputs and outputs (what goes in must come out!).
5. Inputs to Data Stores only come from Processes.
6. Outputs from Data Stores only go to Processes.
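Rules 2-6 above are mechanical enough to check automatically. The sketch below is a hypothetical illustration (the node names and flows are invented), showing how a tool might flag illegal flows such as an entity writing directly to a data store:

```python
# Node kinds in a DFD: external entity, process, or data store.
NODES = {
    "Customer": "entity",
    "Process Order": "process",
    "Orders": "store",
}

def flow_violations(flows):
    """Return rule violations for a list of (source, target) flow pairs."""
    errors = []
    for src, dst in flows:
        s, d = NODES[src], NODES[dst]
        if s == "entity" and d != "process":
            errors.append(f"{src} -> {dst}: flows from entities must go to processes")
        if d == "entity" and s != "process":
            errors.append(f"{src} -> {dst}: flows to entities must come from processes")
        if s == "store" and d != "process":
            errors.append(f"{src} -> {dst}: data store outputs only go to processes")
        if d == "store" and s != "process":
            errors.append(f"{src} -> {dst}: data store inputs only come from processes")
    return errors

# Legal flows produce no errors; an entity writing straight to a store is flagged.
print(flow_violations([("Customer", "Process Order"), ("Process Order", "Orders")]))
print(flow_violations([("Customer", "Orders")]))
```

Note that the entity-to-store flow triggers two rules at once: the entity side (rule 2) and the store side (rule 5).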


The Process Symbol

Processes transform or manipulate data. Each box has a unique number as identifier (top left) and a unique name (an imperative - e.g. 'do this' - statement in the main box area). The top line is used for the location of, or the people responsible for, the process. Processes are 'black boxes' - we don't know what is in them until they are decomposed. Processes transform or manipulate input data to produce output data. Except in rare cases, you can't have one without the other.


Data Flows

Data Flows depict data/information flowing to or from a process. The arrows must either start and/or end at a process box. It is impossible for data to flow from data store to data store except via a process, and external entities are not allowed to access data stores directly. Arrows must be named. Double ended arrows may be used with care.


External Entities

External Entities, also known as 'external sources/recipients', are things (e.g. people, machines, organizations, etc.) which contribute data or information to the system or which receive data/information from it. The name given to an external entity represents a type, not a specific instance of the type. When modeling complex systems, each external entity in a DFD is given a unique identifier. It is common practice to duplicate external entities in order to avoid crossing lines, or simply to make a diagram more readable.


Data Stores

Data Stores are locations where data is held temporarily or permanently. In physical DFDs there can be four types:

D = computerised Data
M = Manual, e.g. filing cabinet.
T = Transient data file, e.g. temporary program file
T(M) = Transient Manual, e.g. in-tray, mail box.


As with external entities, it is common practice to have duplicates of data stores to make a diagram less cluttered.

REFERENCES:

http://www.cems.uwe.ac.uk/~tdrewry/dfds.htm

SAD1 - Assignment 9


A data flow diagram models the system as a network of functional processes and its data. It documents the system’s processes, data stores, flows which carry data, and terminators which are the external entities with which the system communicates.




SAD1 - Assignment 8


An activity diagram is a UML diagram that is used to model a process. It models the actions (or behaviors) performed by the components of a business process or IT system, the order in which the actions take place, and the conditions that coordinate the actions in a specific order. Activity diagrams use swim lanes to group actions together. Actions can be grouped by the actor performing the action or by the distinct business process or system that is performing the action.



MIS2 - Assignment 9

The existing models of information technology (IT) acceptance were developed with the concept of the static individual computing environment in mind. As such, in today's rapidly changing IT environment, they do not serve as adequate indicators of an individual's IT usage behavior.

“The rate and magnitude of change are rapidly outpacing the complex of theories -- economic, social, and philosophical - - on which public and private decisions are based. To the extent that we continue to view the world from the perspective of an earlier, vanishing age, we will continue to misunderstand the developments surrounding the transition to an information society, be unable to realize the full economic and social potential of this revolutionary technology, and risk making some very serious mistakes as reality and the theories we use to interpret it continue to diverge." - Cordell (1987)

The three changes that are likely to have a substantial impact on USEP in the next three years are the following:

1. Electronic Processing of all services

Electronic Data Processing (EDP) refers to the use of automated methods to process commercial data. Typically, this involves relatively simple, repetitive activities that process large volumes of similar information. For example: stock updates applied to an inventory, banking transactions applied to account and customer master files, booking and ticketing transactions applied to an airline's reservation system, and billing for utility services. Its advantages are:

* Speed - it operates at the speed of electric flow, which is measured in billionths and trillionths of a second; it is faster than any other machine designed to do similar work.
* Accuracy - high-speed processing by computer is accompanied by highly accurate results; the electronic circuitry of a computer is such that, when the machine is programmed correctly and the incoming data is error-free, the accuracy of the output is relatively assured.
* Automatic operation - an electronic computer can carry out a sequence of many data processing operations without human intervention; the various operations are executed by way of a stored computer program.
* Decision-making capability - a computer can perform certain decision instructions automatically.
* Compact storage - electronic data processing systems have the ability to store large amounts of data in compact and easily retrievable form.
* Discipline imposed - to solve a problem with a computer you must first understand the problem, and then program the computer to give you the right answers. Understanding a problem is one thing, but understanding it to the depth of detail and insight required to program a computer is a completely different matter.
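The batch style of EDP described above (many similar transactions applied automatically to a master file) can be sketched as follows. The account numbers and amounts are made-up examples:

```python
# Master file: account number -> current balance.
accounts = {"A-100": 500.0, "A-200": 250.0}

# A batch of similar transaction records (account, signed amount).
transactions = [
    ("A-100", -120.0),  # withdrawal
    ("A-200", +75.0),   # deposit
    ("A-100", +40.0),   # deposit
]

# A stored program applies every record without human intervention.
for account_no, amount in transactions:
    accounts[account_no] += amount

print(accounts)  # {'A-100': 420.0, 'A-200': 325.0}
```

The point is the shape of the work, not the arithmetic: a simple, repetitive update executed automatically over a large volume of uniform records.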

2. Virtual Learning

A virtual learning environment (VLE) is a set of teaching and learning tools designed to enhance a student's learning experience by including computers and the Internet in the learning process. The principal components of a VLE package include curriculum mapping (breaking curriculum into sections that can be assigned and assessed), student tracking, online support for both teacher and student, electronic communication (e-mail, threaded discussions, chat, Web publishing), and Internet links to outside curriculum resources. Its advantages are: learning without any restriction as to time or space; courses based on modules with flexible time schemes, which take individual learning needs into account; and greater responsibility taken by students in the learning process.

3. RFID

RFID stands for Radio-Frequency IDentification. The acronym refers to small electronic devices that consist of a small chip and an antenna. The chip is typically capable of carrying 2,000 bytes of data or less. An RFID device serves the same purpose as a bar code or a magnetic strip on the back of a credit card or ATM card: it provides a unique identifier for that object. And, just as a bar code or magnetic strip must be scanned to get the information, the RFID device must be scanned to retrieve the identifying information. Its advantages are:

* RFID tags are very simple to install or inject into the bodies of animals, helping to keep track of them. This is useful in animal husbandry and on poultry farms, where the installed tags give information about the age, vaccinations, and health of the animals.
* RFID technology is better than bar codes because it cannot be easily replicated, which increases the security of the product.
* Supply chain management forms a major part of retail business, and RFID systems play a key role by managing updates of stocks, transportation, and logistics of products.
* Barcode scanners have repeatedly failed to provide security for gems and jewelry in shops; nowadays, RFID tags are placed inside jewelry items and an alarm is installed at the exit doors.
* RFID tags can store up to 2 KB of data, whereas a bar code can represent just 10-12 digits.

REFERENCES:

http://wiki.answers.com/Q/Advantages_of_electronic_data_processing
http://whatis.techtarget.com/definition/0,,sid9_gci866691,00.html
http://www.friends-partners.org/utsumi/Global_University/Global%20University%20System/Tapio%27s_Slides_Virtual_Learning/tsld008.htm
http://www.technovelgy.com/ct/Technology-Article.asp?ArtNum=1

Friday, February 5, 2010

SAD1 - Assignment 7


Consider USEP's pre-enrollment system, develop a use case diagram and write a brief use case description. A use case is a methodology used in system analysis to identify, clarify, and organize system requirements. The use case is made up of a set of possible sequences of interactions between systems and users in a particular environment and related to a particular goal. It consists of a group of elements (for example, classes and interfaces) that can be used together in a way that will have an effect larger than the sum of the separate elements combined. The use case should contain all system activities that have significance to the users. A use case can be thought of as a collection of possible scenarios related to a particular goal, indeed, the use case and goal are sometimes considered to be synonymous.




MIS2 - Assignment 8

fast forward ..., you were hired and have been tasked to develop a strategic information systems plan for a company. The company officers have extended an invitation for you to meet with them to discuss the direction of the company. Before this meeting, they have asked that you provide a list of questions with some explanation about the "why" of the question so they can be prepared, thus maximizing the output from this meeting.

Develop a list of questions you would ask the officers of the company and give an explanation and justification for each question.

***Below are the questions I have formulated for the upcoming meeting on the development of the company's SISP.

1. Why have you chosen me to develop a strategic information systems plan for your company?

This will reveal their reasons for choosing me to develop the company's SISP.

2. What would you want the company to achieve?

This question uncovers the goals of the company. Goals can be summarized in the phrase "dream with a deadline": a goal is an observable and measurable end result having one or more objectives to be achieved within a more or less fixed timeframe. In comparison, a 'purpose' is an intention (internal motivational state) or mission. The question "Has the goal been achieved?" can always be answered with either a "Yes" or a "No." A purpose, however, is not 'achieved' but is instead pursued every day.

3. What are the strengths and weaknesses of your company?

SWOT analysis is a strategic planning method used to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in a project or in a business venture. It involves specifying the objective of the business venture or project and identifying the internal and external factors that are favorable and unfavorable to achieving that objective.

4. What do you think is the reason why we have to develop SISP for your company?

Strategic planning focuses largely on managing interaction with environmental forces, which include competitors, government, suppliers, customers, various interest groups and other factors that affect the business of the company and its prospects.

5. What are your objectives for the company?

Objective is the desired or needed result to be achieved by a specific time. An objective is broader than a goal, and one objective can be broken down into a number of specific goals.

6. Would you implement whatever SISP I would develop?

The answer will depend on whether they like the SISP I develop for the company.

7. Can your company afford the budget to be allocated for the SISP to be implemented?

A company would not hire someone to develop an SISP if it has no budget allocated for implementing it.

8. What are the existing systems in your company?

The legacy systems are important for me to know so that I can account for them in the SISP I will develop.

Saturday, January 30, 2010

SAD1 - Assignment 6

Consider the following dialogue between a systems professional, John Juan, and a manager of a department targeted for a new information system, Peter Pedro:

Juan: The way to go about the analysis is to first examine the old system, such as reviewing key documents and observing the workers perform their tasks. Then we can determine which aspects are working well and which should be preserved.

Pedro: We have been through these types of projects before and what always ends up happening is that we do not get the new system we are promised; we get a modified version of the old system.

Juan: Well, I can assure you that will not happen this time. We just want a thorough understanding of what is working well and what isn’t.

Pedro: I would feel much more comfortable if we first started with a list of our requirements. We should spend some time up-front determining exactly what we want the system to do for my department. Then you systems people can come in and determine what portions to salvage if you wish. Just don’t constrain us to the old system.

Required:

a.Obviously these two workers have different views on how the systems analysis phase should be conducted. Comment on whose position you sympathize with the most.

b.What method would you propose they take? Why?


Well, before I start, let me first define the analysis phase and its categories of analysis.

The analysis phase is the building block of a training program. The basis for who must be trained, what must be trained, when training will occur, and where the training will take place are accomplished in this phase. The product of this phase is the foundation for all subsequent development activities. The analysis phase is often called a Front-End Analysis. That is, although you might perform analysis throughout the ISD process, such as in the design and development phases, this "front end" of the ISD process is where the main problem identification is performed.

When performing an analysis, it is best to take a long-term approach to ensure that the performance improvement initiative ties in with the organization's vision, mission, and values. This connects each need with a metric to ensure that it actually does what it is supposed to do. This is best accomplished by linking performance analysis needs with Kirkpatrick's Four Levels of Evaluation, which means there are four categories of analysis (Phillips, 2002).

Business Needs

Investigate the problem or performance initiative and see how it supports the mission statement, leader's vision, and/or organizational goals, etc. Fixing a problem or making a process better is just as good as an ROI, if not better. Organizations that focus strictly on ROI are normally focusing on cost-cutting. And you can only cut costs so far before you start stripping out the core parts of a business. A much better approach is to improve a performance or process that supports a key organization goal, vision, or mission. When senior executives were asked the most important training initiatives, 77% cited, "aligning learning strategies with business goals"; 75% cited, "ensuring learning content meets workforce requirements"; and 72%, "boosting productivity and agility" (Training Magazine, Oct 2004). Thus, senior leadership is not looking at training to be a profit center (that is what other business units are for), rather they are looking at performance improvement initiatives to help "grow" the organization so that it can reach its goals and perform its mission. The goal is to make an impact or get some sort of result. So once you have identified the gap between present performance and the organization's goals and vision; create a level 4 evaluation (impact) that measures it -- that is, what criteria must be met in order to show that the gap has actually been bridged?

Job Performance Needs

While the first analysis looked at business needs, this analysis looks at job performance needs, and the two can differ slightly. The business need often has a slightly more visionary or future-oriented look to it, while the job performance need normally looks at what is needed now. Thus, business needs tend to be more developmental in nature (future-oriented), while job performance needs are normally more related to the present. This is perhaps the most important need to look at, as it links the performer with the organization. When analyzing job performance, you want to look at the entire spectrum that surrounds the job: processes, environment, actual performance versus needed performance, and so on. It often helps to divide the analysis into three groups: people, data, and things.

Training Needs

As you assess the performance for any needed interventions, look at the Job/Performer requirements, that is, what the performer needs to know in order for the performance intervention to be successful. In addition, look at how you are going to evaluate any learning requirements (level 2). It is one thing to determine the learning needs (skill, knowledge, & self system [attitude, metacognition, etc.]), but it is quite another thing to ensure that those requirements actually take place.

Individual Needs

This analysis ensures that the performance intervention actually conforms to the individual's requirements. For example, the Training Needs analysis might determine that the job holders need to learn a new process. In this needs analysis, the target population is looked at more closely to determine the actual content, context, and delivery method of the performance intervention.

***In the dialogue above, I obviously sympathize with Peter Pedro, since he is the manager of the department targeted for the new information system. As stated above, when performing an analysis it is best to take a long-term approach to ensure that the performance improvement initiative ties in with the organization's vision, mission, and values, connecting each need with a metric to ensure that it actually does what it is supposed to do. As the manager of the department, Peter Pedro obviously knows what is best for the information system that will be implemented there. He is the one who surely knows all the transactions being processed in the department, which will be incorporated into the information system that John Juan will be developing.

***I would propose that they take the method Peter Pedro suggested for the analysis phase of the system. Since Peter Pedro is the acting client for the proposed system, he should be the one to specify the needs of his department that the system must meet. Understanding a client's business is central to developing the right solution, and the analysis stage allows a firm such as Tectura to develop this knowledge. It is also the stage where John Juan and the other systems people work with clients such as Peter Pedro to examine the standard software functionality and determine whether their specific business requirements will mean modifications or customisation of the standard software.

http://www.nwlink.com/~donclark/hrd/sat2.html

http://www.au.tectura.com/Page/cm95/Analsis_phase_95.asp?d=1

MIS2 - Assignment 7

Arguably the most popular search engine available today, Google is widely known for its unparalleled search engine technology, embodied in its web page ranking algorithm, PageRank, and running on an efficient distributed computer system. In fact, the verb "to Google" has ingrained itself in the vernacular as a synonym for "[performing] a web search." The key to Google's success has been its strategic use of both software and hardware information technologies. The IT infrastructure behind the search engine includes huge storage databases and numerous server farms that produce significant computational processing power. These critical IT components are distributed across multiple independent computers that provide parallel computing resources. This architecture has allowed Google's business to reach a market capitalization of over $100 billion and become one of the most respected and admired companies in the world.

Google is one name in the technology arena that is well poised to rule. Over the past decade it has been all the way up for Google, and the company has undoubtedly been ruling the internet economy. Google has made its mark on the industry with more than 150 products and will continue to grow with its ever-increasing portfolio.

Google’s Competitors

Google faces competition in every aspect of its rapidly evolving business, particularly from other companies that seek to connect people with online information and provide them with relevant advertising. Currently, Google considers its primary competitors to be Microsoft and Yahoo.

But in a blog post that I've read, the author predicted the 10 companies that will become Google's toughest competitors in 2010. These are:

1. Apple

Having gone from partners to rivals, Apple is one of Google's sternest opponents in 2010. Today, Apple and Google are locking horns in the fields of smartphones, mobile app stores, operating systems, mobile advertising, online music, and so on. Likewise, Apple is more than up to the task of battling Google in browsers, where Google Chrome competes against Apple Safari. The battle between them will intensify, as the market for digital music and smartphones is set for growth in 2010. Google's music search, along with its partners MySpace and Pandora, is looking to compete with Apple's iTunes, which was the No. 1 music retailer in the United States in 2009. Further, Google's Android will have a tough time as Apple's iPhone continues to grab hold of the market all around the globe.

2. Microsoft

Microsoft is a company that has had one of the most dominant impacts on the IT industry, so without a doubt it is Google's biggest adversary in 2010, and these two giants will be locking horns for market supremacy in areas such as search, collaboration tools, and browsers. Google has reigned as the leader in search, but with the release of Bing in May 2009, Microsoft has raised a few questions within Google's management team. With features such as ranking search results based on relevancy to other users, Microsoft has inked Bing-related deals with Twitter, Facebook, and Yahoo. Microsoft continued to enhance Bing, adding image search and mapping, and in response Google unveiled real-time search. In December, Google also added a photo search capability, a dictionary, and a translator that finds relevant content in 40 languages. Entering 2010, Google still dominates search, with more than 70% of the market. Apart from search, the battle is likely to focus on cloud-based collaboration tools: Google Apps is designed to undercut sales of Microsoft products, including Exchange and SharePoint, and Microsoft has responded with Office Web Apps, free Web-based versions of Word, Excel, PowerPoint, and OneNote that are due out in 2010. Last but not least, the browser war between these two giants is likely to heat up in 2010. So 2010 awaits the answer to whether the market share of Microsoft's ever-so-popular premier browser can be brought down by Google's Chrome.

3. Amazon

In 2009, Google's effort to scan millions of out-of-print books and incorporate them into online search gained some momentum and enabled it to offer over 500,000 digital books for free to customers of the Sony Reader and the Barnes & Noble Nook, which is due in January. Further, its plans to open Google Editions, an e-book store, have opened up a new rivalry with Amazon. Amazon, with its Kindle e-book reader, is one of the leaders in the e-book reader market. The other area where Google is taking on Amazon is cloud computing. Google App Engine, a newer cloud computing platform that allows developers to create their own Web applications and run them on Google's infrastructure, will be competing with Amazon's Elastic Compute Cloud (EC2), which has already grabbed hold of the market with several upgrades since its release in 2006. So it will be a great battle to watch as these two giants fight for market supremacy in cloud computing and e-book readership.

4. Facebook


Facebook, probably the most popular thing on the internet right now, has attracted 350 million active users in just six years and is a subject of interest for the people at Google too. In 2010, the Google and Facebook rivalry is likely to heat up over the question of where people will find their information in the future: in search or in social networks? With the ever-increasing use of social networking and the rise of Facebook, Google's worry seems to be a viable one. So, in 2010 Google, with its Orkut, will be in battle with Facebook. Orkut offers Google Friend Connect, a tool for Web publishers to add social networking content to their sites, in direct competition with the similarly named Facebook Connect. Meanwhile, Facebook has sought out relationships with several arch-enemies of Google, including Microsoft and Yahoo. So it's for sure that this battle is worth taking note of in 2010.

5. Twitter

No doubt, if Facebook is on the rise, then it is no different with Twitter. If social networking is the way to go, then Google will certainly find Twitter in its way. Twitter, a micro-blogging site, has in a way revolutionized how we communicate these days. Google's Friend Connect will face tough competition from Twitter's Connect in 2010 as Twitter looks to move up the ranks in social networking. The other area where the two find themselves competing is real-time search, where Google's real-time search and Twitter's will be trying to outperform each other in 2010. So, this battle will be a good one to watch in 2010.

6. Mozilla

With the release of Google Chrome, Google has stepped into the ever-so-popular browser battle. Mozilla has been in the market for years, and this step from Google is likely to create a conflict of interest between the two. Of late, the war between them has heated up even more, and the battle has now moved to default search. Mozilla has shown intent to kick Google out of its default search engine status: the latest rumours on the internet suggest that Mozilla is eyeing a deal with Microsoft to make Bing the default search engine in Firefox. This may not impact Google immediately, but eventually this move, if it comes true, is likely to decrease Google's share of the search market. Hence, Google now has Mozilla on a double war front: first the obvious browser war, and now the war over default searches.

7. Yahoo

When it comes to search, one of Google's biggest competitors besides Microsoft is Yahoo. Yahoo has been in the market with a variety of products in the areas of email, messaging, news, search, and analytics services, so without doubt it will be a fearsome competitor for Google. In 2009, Yahoo made some improvements by integrating search with its rich content: users can watch videos or stream music straight from the Yahoo search results page, and Yahoo also helps users find travel deals and compare product prices. Further, Yahoo has recently added Twitter to its search page, and a joint search and advertising deal between Yahoo and Microsoft may yet be approved by federal regulators. This could prove costly to Google, so 2010 is the year to watch as competitors look to outperform Google in the market with the various joint forces being formed by its rivals.

8. Cisco

Google definitely has a tough challenge against Cisco. With years of experience on its web-based collaboration platform, WebEx, and a superior VoIP service, Cisco poses a threat to Google's Wave and Voice. In addition, Cisco is looking to enhance its video conferencing quality by focusing on collaboration through internet video, desktop video, and consumer TelePresence. Cisco's presence in the cloud is another leading edge it has over Google: as Google looks to take everything to the web, it will certainly face good competition from Cisco on this front. Moreover, according to Network World, Cisco is looking to enter the smartphone market in the very near future (by mid-2010), and its recent acquisition of Pure Digital and its Flip camera shows Cisco's intent to take video to the mobile phone. Thus, we might see Cisco giving a hard time to Google's Nexus One in the coming days.

9. IBM

By now it is quite clear that 2010 will be the year when the big internet giants try to gain the large market share that will be up for grabs in collaboration tools. So, 2010 is likely to reopen Google's rivalry with IBM with the release of new collaboration tools such as Google Wave. Google has stepped onto the battlefield with its low-cost hosted collaboration tools such as Google Apps, and will compete against IBM's LotusLive, which has attracted more than 2 million businesses in the last two years.

10. Nokia

Today, Nokia holds a firm grip on the mobile phone market, with 4 out of every 10 mobiles sold. The rise of smartphones means Google will be in rivalry with Nokia in the realm of smartphone operating systems: Nokia’s open-source Symbian operating system will be competing with Google’s Android. Through a recent deal with Microsoft, Nokia is set to bring Office Mobile to Symbian devices, and its promise of an improved version of Symbian in 2010 means Android will have a tough battle to face. But Android is poised for major developments in 2010, and with commitments from Acer, Sony Ericsson, HTC, and Motorola, this will be a worthwhile battle to watch in 2010 and the years to come. At this point, then, Google clearly has tough battles to fight in 2010. Most of its arch-rivals are gearing up to pose serious threats, either single-handedly or in collaboration, so these ten match-ups should keep 2010 interesting enough for us to watch and keep Google on its toes.

GOOGLE’S BUSINESS MODEL AND STRATEGY

Business Model

Since its beginning as a research project by two computer science doctoral students at Stanford University, Google has continued to follow its mission “to organize the world's information and make it universally accessible and useful.” From Google’s founding in 1997 until 2000, the company did not have a well-defined business model for generating revenue. In 2001, Google’s two co-founders hired Eric Schmidt, the chairman and CEO of Novell and former CTO of Sun Microsystems, as Google’s new CEO to help drive the effort of creating a business model. Under this new leadership, Google built a core business in online advertising, enabled by the millions of users who use its search engine every day. Revenue and profit growth in online advertising came both from Google’s search engine homepage and from partner websites that display Google-sponsored advertisements. Google created a cost-per-click pricing scheme for sponsored advertisements, under which advertisers pay only a base fee plus a charge for each referral to their site.
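The cost-per-click scheme described above can be sketched as simple arithmetic: a flat base fee plus a per-click charge. The function name and the dollar figures below are purely illustrative assumptions, not Google’s actual rates.

```python
# Illustrative sketch of cost-per-click (CPC) billing.
# base_fee and cost_per_click values are hypothetical examples.

def advertiser_cost(base_fee: float, cost_per_click: float, clicks: int) -> float:
    """Total charge: a flat base fee plus a charge for each click-through."""
    return base_fee + cost_per_click * clicks

# An advertiser paying a $5.00 base fee at $0.25 per click,
# whose ad draws 120 click-throughs:
total = advertiser_cost(5.00, 0.25, 120)
print(f"${total:.2f}")  # → $35.00
```

The key property of this model is that the advertiser’s spend scales with actual referrals rather than with the number of times the ad is merely displayed.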

Business Strategy

Google is generally secretive about its business strategy, but it is evident that Google is building the foundation for all of its products and services around a central theme: leveraging advanced search technology and personalized advertising. To maintain its reputation as a leading technology innovator, Google has been aggressively acquiring software start-ups that can be easily integrated into its existing solutions and can instantly gain visibility through Google’s reach. This strategy of growing through small acquisitions is also used by Yahoo, one of Google’s major competitors, although the underlying methodology differs: Yahoo’s acquisitions have focused on search technology companies with specialized search functionality, so Yahoo maintains a group of search technologies for different products and services, while Google has only one. Over time, with greater competition, the online advertising network may become commoditized, and Google will need to develop new business models to entice new customers and to deepen relationships with existing ones for customer lock-in (Elgin, 2004). For existing customers, Google offers Advanced Tools & Reporting to support sophisticated advertisers, and it plans to integrate advertising more tightly with its other products. To reach new markets faster, Google is expanding its advertising business beyond online marketing to other media, including radio and print.

http://investor.google.com/faq.html#competitors
http://technology.globalthoughtz.com/index.php/10-toughest-competitors-of-google-in-2010/
http://www.crito.uci.edu/papers/2007/Google.pdf

