Saturday, March 13, 2010

SAD1 - Assignment 12

Infor ERP (Enterprise Resource Planning)



Flexible, low-cost ERP solutions that match the way your business works.


In today's world of globalization and price pressures, it's imperative that your enterprise resource planning systems offer business-specific solutions with industry experience built in. This is true whether you produce goods made from distinct parts and components such as automobiles, electronics, and machinery or goods made by blending ingredients such as foods, beverages, pharmaceuticals, and chemicals.


As an ERP software vendor, Infor offers a variety of ERP solutions that help companies in a wide spectrum of subsectors automate, plan, collaborate, and execute according to their unique business requirements. Built on an open, flexible, service-oriented architecture (SOA) with modern, web-based user interfaces, our scalable ERP solutions never lock you in to one mode of operating. Instead, they offer a breadth of functionality that enables you to automate key manufacturing and financial processes, meet fluctuating customer demand and compliance requirements, and collaborate internally as well as externally across your supply chain—all at a low total cost of ownership. Lean manufacturing capabilities are built in to our ERP solutions to minimize waste and increase quality and productivity; strong aftermarket service capabilities expedite service management.


With multiple deployment and buying options for Infor ERP, including Software as a Service (SaaS), manufacturers can choose the model that meets their specific requirements.


Infor ERP solutions help companies like yours:

* Reduce operational costs and improve efficiency
* Gain better visibility into transactions across the enterprise
* Make better business decisions
* Deliver the right product at the right time
* Keep customer promises
* Adopt manufacturing best practices, including lean


Infor's ERP solutions meet the diverse needs of today's manufacturers with robust functionality for two broad categories of manufacturing:


Production environments characterized by individual, separate-unit manufacture of highly complex products—Infor's ERP software systems offer a high degree of flexibility for order-driven manufacturing where unit volumes are typically low and lead times variable. We extend the traditional ERP footprint with unrivaled support for a multitude of cross-business and manufacturing operations, integrated business process modeling, change and compliance management, and aftermarket service support, all based on more than 25 years of manufacturing excellence.


Process manufacturing production environments where ingredients are blended to formulate a whole—Infor ERP solutions can help minimize the total cost of quality, the cost to serve customers, and the cost of compliance, while meeting ever-increasing demand variability. We offer industry-specific functionality, advanced workflow technology, and flexible business process support with strong lot traceability, packaging, customer service, regulatory compliance, and financial capabilities. Industry expertise acquired through long-term partnerships with leading global process manufacturers is embedded in these solutions.

Service Management—Power and control for customer-centric service and maintenance.

Lean Manufacturing—Enabling lean processes across your enterprise and value chain.

Quality Management—Improved quality for increased productivity.

Financials—Single, integrated modern finance solution for manufacturers.

Manufacturing—Flexibility and control for manufacturers of highly complex products.

Process Manufacturing—Proven value optimization capabilities for the process industry.

Wholesale and Distribution—Control and responsiveness for extended supply chains.


REFERENCES:

http://www.infor.com/

SAD1 - Assignment 11


CHOOSING OR DEFINING DEPLOYMENT ENVIRONMENT

Choosing a deployment strategy requires design tradeoffs, for example because of protocol or port restrictions or because of specific deployment topologies in your target environment. Identify your deployment constraints early in the design phase to avoid surprises later, and involve members of your network and infrastructure teams to help with this process.

When choosing a deployment strategy, an analyst should understand the target physical environment for deployment, the architectural and design constraints that environment imposes, and its security and performance impacts.


Distributed vs. Non-distributed Deployment

When creating your deployment strategy, first determine whether you will use a distributed or a non-distributed deployment model. If you are building a simple application for which you want to minimize the number of required servers, consider a non-distributed deployment. If you are building a more complex application that you will want to optimize for scalability and maintainability, consider a distributed deployment. In a non-distributed deployment, all of the functionality and layers reside on a single server except for data storage functionality; in a distributed deployment, the layers of the application reside on separate physical tiers.
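
As a rough, hypothetical illustration (server and layer names are invented, not tied to any particular product), the two models can be pictured as mappings from servers to the layers they host:

```python
# Hypothetical topology sketches contrasting the two deployment models.
# Server and layer names are illustrative only.

non_distributed = {
    "app_server_1": ["presentation", "business", "data_access"],  # all layers on one box
    "db_server_1":  ["database"],                                 # data storage stays separate
}

distributed = {
    "web_server_1": ["presentation"],              # web tier
    "app_server_1": ["business", "data_access"],   # application tier
    "db_server_1":  ["database"],                  # data tier
}

def servers_hosting(topology, layer):
    """Return the servers that host a given layer in a topology."""
    return [server for server, layers in topology.items() if layer in layers]

print(servers_hosting(distributed, "business"))   # ['app_server_1']
```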


Scale Up vs. Scale Out

Your approach to scaling is a critical design consideration. Whether you plan to scale out your solution through a Web farm, a load-balanced middle tier, or a partitioned database, you need to ensure that your design supports this. When you scale your application, you can choose from, and combine, two basic options: scale up (get a bigger box) and scale out (get more boxes). To scale up, you add hardware such as processors, RAM, and network interface cards (NICs) to your existing servers to support increased capacity. To scale out, you add more servers and use load-balancing and clustering solutions.
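
To make scaling out concrete, here is a minimal sketch (server names invented) of round-robin load balancing across a pool of web servers; scaling out means constructing the pool with more servers, while scaling up would mean upgrading the hardware behind an existing entry:

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: each request goes to the next server in the pool."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        # Rotate through the pool so load is spread evenly.
        return f"{request} -> {next(self._cycle)}"

# Scale out: build the balancer with more boxes instead of bigger ones.
balancer = RoundRobinBalancer(["web01", "web02", "web03"])
print(balancer.route("GET /orders"))   # GET /orders -> web01
print(balancer.route("GET /orders"))   # GET /orders -> web02
```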

Consider Design Implications and Tradeoffs up Front

You need to consider aspects of scalability that may vary by application layer, tier, or type of data. Know your tradeoffs up front and know where you have flexibility and where you do not. Scaling up and then out with Web or application servers might not be the best approach.

Stateless Components

If you have stateless components (for example, a Web front end with no in-process state and no stateful business components), this aspect of your design supports both scaling up and scaling out. Typically, you optimize the price and performance within the boundaries of the other constraints you may have.

Data

For data, decisions largely depend on the type of data:

* Static, reference, and read-only data. For this type of data, you can easily have many replicas in the right places if this helps your performance and scalability. This has minimal impact on design and can be largely driven by optimization considerations. Consolidating several logically separate and independent databases on one database server may or may not be appropriate even if you can do it in terms of capacity. Spreading replicas closer to the consumers of that data may be an equally valid approach. However, be aware that whenever you replicate, you will have a loosely synchronized system.
* Dynamic (often transient) data that is easily partitioned. This is data that is relevant to a particular user or session (and if subsequent requests can come to different Web or application servers, they all need to access it), but the data for user A is not related in any way to the data for user B.
* Core data. This type of data is well maintained and protected. This is the main case where the “scale up, then out” approach usually applies. Generally, you do not want to hold this type of data in many places because of the complexity of keeping it synchronized. This is the classic case in which you would typically want to scale up as far as you can (ideally, remaining a single logical instance, with proper clustering), and only when this is not enough, consider partitioning and distribution scale-out. Advances in database technology (such as distributed partitioned views) have made partitioning much easier, although you should do so only if you need to. This is rarely because the database is too big, but more often it is driven by other considerations such as who owns the data, geographic distribution, proximity to the consumers, and availability.

Consider Database Partitioning at Design Time

If your application uses a very large database and you anticipate an I/O bottleneck, ensure that you design for database partitioning up front. Moving to a partitioned database later usually results in a significant amount of costly rework and often a complete database redesign. Partitioning provides several benefits: the ability to restrict queries to a single partition, thereby limiting resource usage to only a fraction of the data, and the ability to engage multiple partitions, thereby getting more parallelism and superior performance because you can have more disks working to retrieve your data.
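
As a minimal, DBMS-agnostic sketch of the idea (the table and key names are invented for the example), hash partitioning routes each row to one of several physical partitions, so a query keyed on a single customer touches only a fraction of the data while multi-customer work can engage the partitions in parallel:

```python
def partition_for(customer_id: int, partition_count: int = 4) -> str:
    """Route a row (or a single-key query) to one physical partition by hashing the key."""
    return f"orders_p{customer_id % partition_count}"

# A query for one customer is restricted to a single partition...
print(partition_for(1042))                          # 'orders_p2'

# ...while a scan across many customers can engage every partition in parallel.
print({partition_for(cid) for cid in range(8)})     # all four partitions
```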


Performance Patterns

Performance deployment patterns represent proven design solutions to common performance problems. When considering a high-performance deployment, you can scale up or scale out. Scaling up entails improvements to the hardware on which you are already running. Scaling out entails distributing your application across multiple physical servers to distribute the load. A layered application lends itself more easily to being scaled out.

Affinity and User Sessions

Web applications often rely on the maintenance of session state between requests from the same user. A Web farm can be configured to route all requests from the same user to the same server—a process known as affinity—in order to maintain state where this is stored in memory on the Web server. However, for maximum performance and reliability, you should use a separate session state store with a Web farm to remove the requirement for affinity.
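
A minimal sketch of the recommended approach, with a plain dictionary standing in for an out-of-process session store (such as a state server or distributed cache); server and session names are invented:

```python
# Shared, out-of-process session store (a dict stands in for a state server or cache).
shared_session_store = {}

def handle_request(server_name, session_id, key=None, value=None):
    """Any server in the farm can handle the request, because session state
    lives in the shared store rather than in one web server's memory."""
    session = shared_session_store.setdefault(session_id, {})
    if key is not None:
        session[key] = value
    return server_name, dict(session)

handle_request("web01", "sess-abc123", "cart", ["item-42"])
print(handle_request("web02", "sess-abc123"))   # web02 still sees the cart; no affinity needed
```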


Reliability Patterns

Reliability deployment patterns represent proven design solutions to common reliability problems. The most common approach to improving the reliability of your deployment is to use a failover cluster to ensure the availability of your application even if a server fails.

Failover Cluster

A failover cluster is a set of servers that are configured in such a way that if one server becomes unavailable, another server automatically takes over for the failed server and continues processing.
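
A simplified sketch of that behavior (node names invented, with a health-check callback standing in for real cluster heartbeat logic): the first healthy server in the cluster is the active one, so a failed primary is skipped automatically.

```python
def active_server(cluster, is_healthy):
    """Return the first healthy server; when the primary fails,
    the next server in the cluster takes over automatically."""
    for server in cluster:
        if is_healthy(server):
            return server
    raise RuntimeError("no healthy server available")

cluster = ["node-a", "node-b"]
print(active_server(cluster, lambda s: True))           # 'node-a' (primary is healthy)
print(active_server(cluster, lambda s: s != "node-a"))  # 'node-b' (primary has failed)
```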


Security Patterns

Security patterns represent proven design solutions to common security problems. The impersonation/delegation approach is a good solution when you must flow the context of the original caller to downstream layers or components in your application. The trusted subsystem approach is a good solution when you want to handle authentication and authorization in upstream components and access a downstream resource with a single trusted identity.

Impersonation/Delegation

In the impersonation/delegation authorization model, resources and the types of operations (such as read, write, and delete) permitted for each one are secured using Windows Access Control Lists (ACLs) or the equivalent security features of the targeted resource (such as tables and procedures in SQL Server). Users access the resources using their original identity through impersonation.
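
The sketch below illustrates the model with plain Python rather than real Windows ACL APIs (the user, resource, and ACL names are invented): the downstream resource authorizes each access against the original caller's identity, which the middle tier impersonates.

```python
# Toy ACL: resource -> set of identities allowed to access it.
acl = {"payroll_table": {"alice"}}

def access_resource(original_user, resource):
    """The resource is secured against the original caller's identity,
    because the middle tier impersonates that caller on each request."""
    if original_user not in acl.get(resource, set()):
        raise PermissionError(f"{original_user} may not access {resource}")
    return f"reading {resource} as {original_user}"

print(access_resource("alice", "payroll_table"))   # allowed
# access_resource("bob", "payroll_table")          # would raise PermissionError
```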

Trusted Subsystem

In the trusted subsystem (or trusted server) model, users are partitioned into application-defined, logical roles, and members of a particular role share the same privileges within the application. With this role-based (or operations-based) approach to security, access to operations (typically expressed by method calls), rather than to back-end resources, is authorized based on the role membership of the caller. Roles, analyzed and defined at application design time, are used as logical containers that group together users who share the same security privileges or capabilities within the application. The middle-tier service then uses a fixed identity to access downstream services and resources.
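
For contrast, a minimal sketch of the trusted subsystem model (role names, permissions, and the service account are invented): the middle tier authorizes the operation against the caller's roles, then reaches the database with one fixed, trusted identity.

```python
ROLE_PERMISSIONS = {
    "clerk":   {"view_order"},
    "manager": {"view_order", "approve_order"},
}

SERVICE_IDENTITY = "svc_order_app"   # fixed identity used for all downstream access

def call_operation(caller_roles, operation):
    """Authorize the operation against the caller's roles, then access the
    database as the trusted service identity rather than as the end user."""
    if not any(operation in ROLE_PERMISSIONS.get(role, set()) for role in caller_roles):
        raise PermissionError(f"{operation} denied")
    return f"executing {operation} against the database as {SERVICE_IDENTITY}"

print(call_operation(["manager"], "approve_order"))
```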


Network Infrastructure Security Considerations

Make sure that you understand the network structure provided by your target environment, and understand the baseline security requirements of the network in terms of filtering rules, port restrictions, supported protocols, and so on. Recommendations for maximizing network security include:

* Identify how firewalls and firewall policies are likely to affect your application's design and deployment. Firewalls should be used to separate Internet-facing applications from the internal network and to protect the database servers. They can limit the available communication ports and, therefore, the authentication options from the Web server to remote application and database servers.
* Consider what protocols, ports, and services are allowed to access internal resources from the Web servers in the perimeter network or from rich client applications. Identify the protocols and ports that the application design requires, and analyze the potential threats that arise from opening new ports or using new protocols.
* Communicate and record any assumptions made about network and application layer security, and which security functions each component will handle. This prevents security controls from being missed when both the development and network teams assume the other team is addressing the issue.
* Pay attention to the security defenses that your application relies upon the network to provide, and ensure that these defenses are in place.
* Consider the implications of a change in network configuration, and how this will affect security.


Manageability Considerations

The choices you make when deploying an application affect the capabilities for managing and monitoring the application. You should take into account the following recommendations:

* Deploy components of the application that are used by multiple consumers in a single central location to avoid duplication.
* Ensure that data is stored in a location where backup and restore facilities can access it.
* Components that rely on existing software or hardware (such as a proprietary network that can only be established from a particular computer) must be physically located on the same computer.
* Some libraries and adaptors cannot be deployed freely without incurring extra cost, or may be charged on a per-CPU basis; therefore, you should centralize these features.
* Groups within an organization may own a particular service, component, or application that they need to manage locally.
* Monitoring tools such as System Center Operations Manager require access to physical machines to obtain management information, and this may impact deployment options.
* The use of management and monitoring technologies such as Windows Management Instrumentation (WMI) may impact deployment options.


REFERENCES:

http://apparchguide.codeplex.com/wikipage?title=Chapter%205%20-%20Deployment%20Patterns&referringTitle=Home

SAD1 - Assignment 10


Data Flow Diagrams (DFDs) model events and processes (i.e., activities which transform data) within a system. DFDs examine how data flows into, out of, and within the system. The DFD principles are:

* A system can be decomposed into subsystems, and subsystems can be decomposed into lower-level subsystems, and so on.
* Each subsystem represents a process or activity in which data is processed. At the lowest level, processes can no longer be decomposed.
* Each 'process' (and from now on, by 'process' we mean subsystem and activity) in a DFD has the characteristics of a system: just as a system must have input and output (if it is not dead), so a process must have input and output.
* Data enters the system from the environment, data flows between processes within the system, and data is produced as output from the system.

The 'Context Diagram' is an overall, simplified view of the target system, which contains only one process box and the primary inputs and outputs. The top or 1st-level DFD describes the whole of the target system; it 'bounds' the system under consideration. Data Flow Diagrams show: the processes within the system; the data stores (files) supporting the system's operation; the information flows within the system; the system boundary; and interactions with external entities.


DFD Notations

Processes, in other methodologies, may be called 'Activities', 'Actions', 'Procedures', 'Subsystems' etc. They may be shown as a circle, an oval, or (typically) a rectangular box. Data are generally shown as arrows coming to, or going from the edge of a process box.

General Data Flow Rules


1. Entities are either 'sources' of or 'sinks' for data inputs and outputs - i.e., they are the originators or terminators of data flows.
2. Data flows from Entities must flow into Processes.
3. Data flows to Entities must come from Processes.
4. Processes and Data Stores must have both inputs and outputs (what goes in must come out!).
5. Inputs to Data Stores only come from Processes.
6. Outputs from Data Stores only go to Processes (these rules are illustrated in the sketch below).
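
Most of these rules reduce to one requirement: every data flow must touch a process. A small, hypothetical validation sketch (node and flow names invented) makes that check explicit:

```python
# Node kinds in a toy DFD model: 'process', 'entity', or 'store'.
kinds = {"Customer": "entity", "Take Order": "process", "Orders": "store"}

def flow_is_valid(source, target):
    """A data flow is valid only if at least one end is a process; entity-to-entity,
    entity-to-store, store-to-entity and store-to-store flows all break the rules above."""
    return kinds[source] == "process" or kinds[target] == "process"

print(flow_is_valid("Customer", "Take Order"))   # True  - entity feeds a process
print(flow_is_valid("Take Order", "Orders"))     # True  - process writes a store
print(flow_is_valid("Customer", "Orders"))       # False - entity may not access a store directly
```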


The Process Symbol

Processes transform or manipulate data. Each box has a unique number as identifier (top left) and a unique name (an imperative - e.g. 'do this' - statement in the main box area). The top line is used for the location of, or the people responsible for, the process. Processes are 'black boxes' - we don't know what is in them until they are decomposed. Processes transform or manipulate input data to produce output data. Except in rare cases, you can't have one without the other.


Data Flows

Data Flows depict data/information flowing to or from a process. Every arrow must start and/or end at a process box; data cannot flow directly from data store to data store except via a process, and external entities are not allowed to access data stores directly. Arrows must be named. Double-ended arrows may be used, with care.


External Entities

External Entities, also known as 'external sources/recipients', are things (e.g., people, machines, organizations) which contribute data or information to the system or which receive data/information from it. The name given to an external entity represents a type, not a specific instance of the type. When modeling complex systems, each external entity in a DFD is given a unique identifier. It is common practice to duplicate external entities in order to avoid crossing lines, or just to make a diagram more readable.


Data Stores

Data Stores are locations where data is held temporarily or permanently. In physical DFDs there can be four types:

D = computerised Data
M = Manual, e.g. filing cabinet.
T = Transient data file, e.g. temporary program file
T(M) = Transient Manual, e.g. in-tray, mail box.


As with external entities, it is common practice to have duplicates of data stores to make a diagram less cluttered.

REFERENCES:

http://www.cems.uwe.ac.uk/~tdrewry/dfds.htm

SAD1 - Assignment 9


A data flow diagram models the system as a network of functional processes and its data. It documents the system’s processes, data stores, flows which carry data, and terminators which are the external entities with which the system communicates.


SAD1 - Assignment 8


An activity diagram is a UML diagram that is used to model a process. It models the actions (or behaviors) performed by the components of a business process or IT system, the order in which the actions take place, and the conditions that coordinate the actions in a specific order. Activity diagrams use swim lanes to group actions together. Actions can be grouped by the actor performing the action or by the distinct business process or system that is performing the action.


MIS2 - Assignment 9

The existing models of information technology (IT) acceptance were developed with the concept of the static individual computing environment in mind. As such, in today's rapidly changing IT environment, they do not serve as adequate indicators of an individual's IT usage behavior.

"The rate and magnitude of change are rapidly outpacing the complex of theories -- economic, social, and philosophical -- on which public and private decisions are based. To the extent that we continue to view the world from the perspective of an earlier, vanishing age, we will continue to misunderstand the developments surrounding the transition to an information society, be unable to realize the full economic and social potential of this revolutionary technology, and risk making some very serious mistakes as reality and the theories we use to interpret it continue to diverge." - Cordell (1987)

The three changes that are likely to have a substantial impact on USEP in the next three years are the following:

1. Electronic Processing of all services

Electronic Data Processing (EDP) refers to the use of automated methods to process commercial data. Typically, this uses relatively simple, repetitive activities to process large volumes of similar information, for example: stock updates applied to an inventory, banking transactions applied to account and customer master files, booking and ticketing transactions in an airline's reservation system, and billing for utility services. Its advantages are:

* Speed - it operates at the speed of electric flow, measured in billionths and trillionths of a second, faster than any other machine designed to do similar work.
* Accuracy - high-speed processing is accompanied by highly accurate results; the electronic circuitry of a computer is such that, when the machine is programmed correctly and the incoming data is error free, the accuracy of the output is relatively assured.
* Automatic operation - an electronic computer can carry out a sequence of many data processing operations without human intervention, the various operations being executed by way of a stored program.
* Decision-making capability - a computer can perform certain decision instructions automatically.
* Compact storage - electronic data processing systems can store large amounts of data in compact and easily retrievable form.
* Imposed discipline - to solve a problem with a computer you must first understand the problem and then program the computer to give you the right answers. Understanding a problem is one thing, but understanding it to the depth of detail and insight required to program a computer is a completely different matter.

2. Virtual Learning

A virtual learning environment (VLE) is a set of teaching and learning tools designed to enhance a student's learning experience by including computers and the Internet in the learning process. The principal components of a VLE package include curriculum mapping (breaking curriculum into sections that can be assigned and assessed), student tracking, online support for both teacher and student, electronic communication (e-mail, threaded discussions, chat, Web publishing), and Internet links to outside curriculum resources. Its advantages are: learning without any restriction as to time or space; courses based on modules with flexible time schemes, which take individual learning needs into account; and greater responsibility taken by students in the learning process.

3. RFID

RFID stands for Radio-Frequency Identification. The acronym refers to small electronic devices that consist of a small chip and an antenna. The chip is typically capable of carrying 2,000 bytes of data or less. The RFID device serves the same purpose as a bar code or the magnetic strip on the back of a credit card or ATM card: it provides a unique identifier for that object. And, just as a bar code or magnetic strip must be scanned to get the information, the RFID device must be scanned to retrieve the identifying information. Its advantages are:

* RFID tags are very simple to install or inject inside the bodies of animals, helping to keep track of them. This is useful in animal husbandry and on poultry farms, where the installed tags give information about the age, vaccinations, and health of the animals.
* RFID technology is better than bar codes because tags cannot be easily replicated, which increases the security of the product.
* Supply chain management forms a major part of retail business, and RFID systems play a key role by managing updates of stock, transportation, and logistics of the product.
* Barcode scanners have repeatedly failed to secure gems and jewelry in shops, so RFID tags are now placed inside jewelry items and an alarm is installed at the exit doors.
* An RFID tag can store up to 2 KB of data, whereas a bar code represents just 10-12 digits.

REFERENCES:

http://wiki.answers.com/Q/Advantages_of_electronic_data_processing
http://whatis.techtarget.com/definition/0,,sid9_gci866691,00.html
http://www.friends-partners.org/utsumi/Global_University/Global%20University%20System/Tapio%27s_Slides_Virtual_Learning/tsld008.htm
http://www.technovelgy.com/ct/Technology-Article.asp?ArtNum=1
