Monday, January 27, 2014

Academic Research Applying Multivariate Methods: Exploratory Factor Analysis, Confirmatory Factor Analysis, and Structural Equation Modeling

Keywords applicable to this article: multivariate, research, statistical modeling, structural framework, causal relationships, model reliability, model validity, exploratory factor analysis, confirmatory factor analysis, structural equation modeling

A research problem may be univariate, bivariate, or multivariate. A univariate problem is concerned with only one research variable, whereas a bivariate problem is concerned with the relationship between two research variables. Normally, univariate problems comprise the study of multiple, independent research variables without regard to their quantitative mutual relationships. For example, a single study may examine attitude, organizational commitment, and employee performance separately in a fast food chain without addressing their quantitative relationships. Some researchers may design triangulation studies by collecting numerical data about the three variables but establishing their interrelationships qualitatively.

On the other hand, bivariate research problems incorporate the study of relationships between two variables by establishing a null and an alternate hypothesis. Most bivariate research problems are concerned with mutual relationships between two variables investigated through multiple independent hypotheses. However, the hypotheses may not be interrelated in the form of a structure or theoretical framework. The hypotheses may be tested using bivariate techniques like correlation analysis, regression analysis, analysis of variance, Student's t-test, the Chi-square test, or simply p-value testing. The outcomes may be definitive causal relationships (the influence of an independent variable on a dependent variable) or simply a reflection of how one parameter varies with respect to another within a controlled research setting. Normally, establishing a relationship between two variables does not guarantee that a causal relationship has been found. Cause-effect relationships can be established by taking support from established theories or by investigating more variables acting upon the two variables. This is where multivariate problems come into the picture.

Multivariate problems are different and complex, requiring sophisticated techniques for investigating relationships among multiple variables. Most multivariate problems require investigation of complex structures rather than mere pairwise relationships. Hence, applying statistics to multivariate problems is not only about statistical calculations but also involves complex statistical modeling. A model may be in the form of a theoretical framework or an initial measurement model. Before the multivariate techniques are discussed, it is important to differentiate between a theoretical framework and an initial measurement model.

A theoretical framework is formed by conducting an intensive literature review and creating a structure whose relationships are grounded in theories. On the other hand, an initial measurement model can be established using the principal component analysis technique employing orthogonal factor rotation.

Technically, the models created following either approach are considered an initial model and are taken through the same reliability, validity, and model fit tests. However, research studies involving theory-based formation of the initial model (commonly referred to as the theoretical framework) are confirmatory or extended studies, whereas research studies involving the principal component analysis technique are exploratory studies. In practice, a theory-based modeling approach should be chosen if the model can be grounded on an extensive and deep theoretical foundation, whereas the principal component analysis technique should be chosen if the model is not sufficiently supported by theories.

Multivariate problems come in two flavours: relationships among multiple observable (measurable) variables, or relationships between one or more groups of observable variables and a group of latent (unobservable, or immeasurable) variables. The latter appears in highly complex research studies. The sequence of techniques used in multivariate statistical modeling is exploratory factor analysis, confirmatory factor analysis, and structural equation modeling. The exploratory factor analysis step may be skipped if theory-based initial modeling has been preferred. In exploratory factor analysis, the number of latent (unobserved) variables influenced by a set of observed variables is explored by obtaining a rotated factor solution using VARIMAX, QUARTIMAX, or EQUAMAX (orthogonal) rotation methods, or PROMAX or DIRECT OBLIMIN (oblique) rotation methods. The most used orthogonal rotation method is VARIMAX. The number of latent variables is commonly determined by the number of factors having an eigenvalue above unity (the Kaiser criterion). The researcher may predetermine the number of latent variables or simply proceed to investigate the factors having eigenvalues greater than unity; the number of latent variables retained should not exceed the number of factors with eigenvalues above unity. This analysis is commonly visualized on a scree plot.
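As a minimal illustration of the eigenvalue and rotation steps described above (with hypothetical data rather than real survey responses; in practice SPSS reports these figures directly), the Kaiser criterion and a VARIMAX rotation can be sketched in NumPy:

```python
import numpy as np

def kaiser_eigenvalues(X):
    """Eigenvalues of the correlation matrix of X (n_samples x n_variables),
    sorted in descending order. Under the Kaiser criterion, factors with
    eigenvalues above unity are candidates for retention."""
    R = np.corrcoef(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal VARIMAX rotation of a (n_variables x n_factors) loading
    matrix. Rotation redistributes loadings across factors while preserving
    each variable's communality (row sum of squared loadings)."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return L @ R
```

The scree plot is simply these sorted eigenvalues plotted against their index; the "elbow" of the plot and the unity cut-off usually suggest similar factor counts.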


The rotated factor table obtained after rotation is of prime importance. It gives the loading of each observed variable on each latent variable. Normally, variables with significant loadings are retained and the rest rejected. The significance of a loading is judged by its value (normally 0.5 or greater) or by the importance of the observed variable in the reliability test. The researcher may name each latent variable by analyzing the group of observed variables loading on it, or by taking help from the literature. Each group forms a scale representing the corresponding latent variable. The researcher may test the reliability of each scale using Cronbach's alpha, split-half, Guttman, parallel, or strictly parallel techniques. In the Cronbach's alpha test, an alpha value of 0.6 or greater is commonly considered a good reliability indicator for a scale if the research involves responses from human subjects (for example, phenomenology and grounded theory studies). However, researchers prefer a higher alpha threshold in scientific and technology-based research studies in which the primary data is collected from experiments or simulations. It is normally observed that an observed variable with a high loading on the latent variable is a good contributor to the Cronbach's alpha value. However, sometimes an observed variable with a low loading (below 0.5) may turn out to be a better contributor to the Cronbach's alpha value. The contribution of observed variables to the Cronbach's alpha value of a scale can be determined from the "alpha if item deleted" column of the reliability output. In some research studies, the researcher may decide to conclude the research once very high reliability values for the scales are achieved. However, it is not guaranteed that these scales, comprising groups of highly loading observed variables, are the causal factors influencing the latent variables. It is recommended that a few validity tests also be conducted.
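The Cronbach's alpha and "alpha if item deleted" computations described above can be sketched in a few lines of NumPy (the data here is hypothetical; SPSS produces the same figures in its reliability output):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def alpha_if_item_deleted(items):
    """Alpha of the scale recomputed with each item removed in turn --
    the column SPSS reports in its item-total statistics table."""
    items = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]
```

An item whose deletion raises the scale's alpha is a candidate for removal, subject to the theoretical considerations discussed above.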
This is where the confirmatory factor analysis technique is useful.

The confirmatory factor analysis technique helps in running validity tests on the model determined either through theory-based approach or through exploratory factor analysis technique. It involves computation of Average Variance Extracted (AVE), Cronbach Alpha, Degrees of Freedom, Root Mean Square Error of Approximation (RMSEA), Root Mean Square Residual (RMR), and Standardized Root Mean Square Residual (SRMR) values.
There are thresholds recommended by various research scholars, based on the research area and sample size, for determining the validity of the model.
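As an illustration of two of the measures above, AVE and RMSEA follow directly from the standardized loadings and the model chi-square. This is a sketch with hypothetical input values, not a replacement for LISREL output:

```python
import math

def ave(std_loadings):
    """Average Variance Extracted: the mean of the squared standardized
    loadings of a latent variable's indicators. AVE >= 0.5 is a common
    convergent-validity rule of thumb."""
    return sum(l ** 2 for l in std_loadings) / len(std_loadings)

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation from the model chi-square,
    its degrees of freedom, and the sample size n. Values near or below
    0.05 are commonly read as close fit, below 0.08 as acceptable."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

For example, three indicators with standardized loadings of 0.7 give an AVE of 0.49, just below the common 0.5 threshold, which is why loadings are often required to exceed 0.7 for convergent validity.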

One should be careful about deciding the thresholds before validating the model. If the objective is simply to validate the initial model, the researcher may conclude the research at this stage. However, there can be situations when the initial model returns unreliable scales and invalid relationships. This is unlikely if the initial model has been constructed with utmost care, but the researcher should be ready to face surprises and should not panic, because the structural equation modeling technique can rescue the research from a probable failure.

Structural equation modeling helps in finding an alternate model with acceptable reliability and validity scores if the initial model has failed due to some unavoidable and irreparable issues. The technique allows the researcher to test multiple models by varying the relationships among variables and finally choose the best-fit model. The test statistics that help in choosing the best-fit model are the goodness-of-fit index, adjusted goodness-of-fit index, normed fit index, non-normed fit index, comparative fit index, parsimony fit index, and incremental fit index. It should be noted that not all of these are suitable for every research study. The researcher should choose the most appropriate ones depending upon the area of research and the sample size. It is recommended to study the literature for choosing the most appropriate fit indices in structural equation modeling.
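Two of the comparative indices mentioned above are simple functions of the fitted model's chi-square and the baseline (independence) model's chi-square; the following sketch, with hypothetical values, shows the arithmetic behind the figures SEM packages report:

```python
def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index, computed from the fitted model's chi-square
    (chi2_m, df_m) and the baseline model's chi-square (chi2_b, df_b).
    Values of 0.95 or above are commonly read as good fit."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 if den == 0.0 else 1.0 - num / den

def tli(chi2_m, df_m, chi2_b, df_b):
    """Tucker-Lewis Index, also reported as the non-normed fit index."""
    rb = chi2_b / df_b
    rm = chi2_m / df_m
    return (rb - rm) / (rb - 1.0)
```

A model whose chi-square equals its degrees of freedom yields a CFI of 1.0, which is why a non-significant chi-square is itself often quoted as a fit criterion.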

The recommended tool for applying the exploratory factor analysis technique is SPSS, and the tool recommended for confirmatory factor analysis and structural equation modeling is LISREL. If you need any help in designing a research study, collecting data, applying techniques for data analysis, and deriving meaningful conclusions and recommendations in a multivariate research study involving exploratory factor analysis, confirmatory factor analysis, and structural equation modeling, you may please contact us. We recommend using Survey Monkey for collecting data and the latest academic versions of SPSS and LISREL for applying these techniques. The academic version of LISREL cannot be used if the number of variables is greater than 15. However, in most cases the number of variables can be reduced to 15 or fewer if the principal component analysis technique has been used and reliable scales constructed by testing their Cronbach's alpha values. This is another advantage of starting the research with exploratory factor analysis rather than a theory-based structural framework. In some research studies, it may not be possible to keep the number of variables below 15. In such cases, it is recommended that a professional copy of LISREL be purchased.

Ideally, the number of variables should be kept as low as possible, especially if the sample size is small (say, less than 100). The higher the number of variables, the greater the difficulty in determining the best-fit model employing structural equation modeling. It is observed that most modern causal research problems require the application of multivariate techniques, and hence it is recommended to master SPSS and LISREL in this context.

We can support multivariate research studies in all the research areas mentioned on the page detailing our subject areas of specialization. The factors and latent variables may be chosen as per the problem description. Typically, latent variables are the ones that cannot be measured directly. Examples are: human attitude, human feelings, commitment to the organisation, willingness to work in a particular field, and behavioural aspects in groups or teams. However, variables lacking data availability because of a lack of systems and processes can also be chosen as latent variables. The factors influencing the chosen latent variables under study may be drawn from past research studies, journal articles, professional studies, industrial reports, press releases, and expert advice. The structure of the theoretical framework may be designed by applying the exploratory factor analysis technique, or based on literature reviews providing adequate information on structural models involving the factors (observed variables) and the latent variables under study.

Some of the examples of multivariate problems are the following:

(a) Influence of organisational citizenship behaviour, organisational commitment, behavioural aspects with peers and superiors, and willingness to participate on effectiveness of information security governance in an organisation
(b) Influence of organisational citizenship behaviour, organisational commitment, behavioural aspects with peers and superiors, and willingness to participate on project performance
(c) Influence of multiple personality types on effectiveness of crisis management decision-making and change management

In the above examples, the influencing variables are unobservable and hence need to be considered as latent variables. In order to measure them, the factors affecting them need to be taken from the literature. The models will comprise relationships of the following generalised form:

Factor groups ---> Latent variables ---> Output variables

The factor groups representing each latent variable are the scales with high reliability (a Cronbach's alpha value of 0.6 or more). The scales can be obtained from exploratory factor analysis (principal component analysis with a rotated solution) or from literature-supported groups. The number of latent variables loaded by factor variables depends upon the number of eigenvalues greater than unity in the set. After rotation (like VARIMAX with Kaiser's normalization), the factor variables regroup under the latent variables with varying levels of loading. The significant loadings (like 0.5 or above) are accepted and the rest are rejected. This results in reduced scales per latent variable, which can be tested using Cronbach's alpha or split-half testing. A scale may reduce further if deleting a factor improves the value of Cronbach's alpha, provided a negative error covariance does not crop up. The researcher should also try to retain the factors strongly supported by theories, even at the cost of a lower reliability level (Cronbach's alpha value) for the scale. The rest of the analysis can be completed through confirmatory factor analysis and structural equation modeling.

Please contact us to discuss your topic or to get ideas about new topics pertaining to your subject area.


Wednesday, January 18, 2012

Cloud Computing Security - A rapidly emerging area for dissertation and thesis research projects

Cloud computing security is a rapidly emerging research area amidst growing security concerns among the companies availing cloud hosting services for their critical IT systems. The virtual closed user group (V-CUG) mode of cloud computing operation, run upon a massive real IT infrastructure shared among thousands of clients, is not yet well understood in the academic or even the professional world. There are many unanswered questions, because a direct analogy with self-hosted infrastructure systems is not yet established. Regulators across the world are facing tough challenges in allowing companies to host their critical IT infrastructures on cloud computing platforms. Protection of user sessions from threats on the Internet takes us back to the old era of zone-based firewall security, which was achieved by establishing public, secured, and de-militarised zones. Intrusion detection and prevention systems extended added advantages to the zone-based security system. However, cloud computing hosting requires the user sessions to traverse the Internet. Then where does zone-based security come into the picture? If this is the only way to access cloud-hosted resources, then what is the solution for secure access to cloud computing resources? Assuming that IP-VPN tunneling using IKE with IPSec and 3DES/AES encryption is the solution to protecting Internet-exposed user sessions, how many tunnels will the cloud hosting providers terminate at their end? Which VPN aggregator can support millions of tunnels? What will be the WAN overhead? What will be the performance? Is it really feasible to have millions of IP-VPN tunnels securing cloud computing clients? Please keep in mind that this is just one area of security, because the security of server operating systems, LANs, applications, web services, platforms, etc. at the cloud hosting end is still unaddressed. What are service providers doing to ensure that one client does not get even accidental access to the data of another client?

Let us begin with the fundamentals. Cloud computing infrastructures employ the same IT components that corporations have been using in their self-hosted infrastructures. However, clouds are deployed at massive scales with virtualisation as their core technology. The security threats and vulnerabilities are the same that the world has been witnessing in self-hosted real and virtual infrastructures. In self-hosted environments, corporations have kept themselves secure by operating within CUG (closed user group) environments, which are protected from the external world through peripheral devices like zone-based firewalls, intrusion prevention systems, network admission control, anomaly control, antivirus/antispyware, etc. All users in the CUG go through an organized authorization system to achieve privilege levels on the secured computers, and their activities are logged and monitored. In the cloud-hosted scenario, the CUG breaks completely. In fact, there is no real CUG at all - it becomes virtual. The sessions between users and servers, which were highly protected on private IP addresses on CUG LANs, get exposed on public IP addresses of the Internet. The security controls are out of the hands of the end customers, as the service providers own the clouds. The end-user files and data get spread across multiple physical hosts, with no identifiers determining the location of a component of a file/folder and its data. The service providers, on the other hand, use real components for the entire cloud and only virtual components for the end customers. Hence, personalisation becomes a major problem, because there is nothing real; everything is just virtual everywhere - the authentications, authorizations, accounting, file locations, database locations, sessions, application demands, servers, etc. The end users get virtual screens to manage their so-called personalised cloudlet on a massive cloud infrastructure.

The challenge is related to going back to the olden days of security controls, prevalent in real CUG environments, and implementing them in the virtual CUG environment. In your study, you can pick one of the prominent security challenges - like access control, network control, de-militarized zones, web services control, file/folder security controls, etc. In fact, you should prefer to choose an area that can be simulated on a network modelling and simulation platform - like OPNET, Cisco Packet Tracer, OMNET++, etc. Do not try to address more than one area in your dissertation/thesis project, because your study would tend to get generalised. I propose that you study the following areas in your dissertation/thesis project about cloud computing security:

You may like to study data security services in cloud computing environments. Data security in cloud computing is still a mystery for the customers, although service providers have implemented all the standard technologies that you can imagine: stateful inspection firewalls, intrusion detection and prevention devices, web services firewalls, application firewalls, spam filters, antivirus, anti-spyware, gateway-level file inspection, etc. But customers are not able to specifically identify the controls applicable to their files/folders, because they do not know their physical location (as you may know, files get distributed into multiple virtual machines spread across multiple data centres). In this context, a new concept is evolving, called the "Unified Threat Management System (UTM System)". In UTM, a separate service provider builds a rich set of controls for the customers that can be shared through a "subscription model" (similar to the cloud computing model) and can assure security for the customers' assets by seamlessly integrating their UTM solutions with the cloud hosting service providers. The customer just needs to buy a leased line connection to the UTM provider and will get all the controls applicable on their hosted environments. The model works as follows:

Currently, cloud computing service providers are operating in three different modes - Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). However, a fourth mode is emerging rapidly to provide security solutions on cloud computing infrastructures - Unified Threat Management as a Service (UTMaaS). Unified threat management (UTM) service for cloud hosting users is a rapidly emerging concept in which the security controls for the end users are managed by a third party, which allows the user sessions from thousands of clients through its systems and ensures optimum protection and personalization. Its services span from network security controls to application security controls. Cloud hosting customers may need a leased circuit connection to the UTM provider, which serves as a backhaul connection to the cloud hosting provider with appropriate peering between the security controls and the infrastructure maintained by the cloud provider (at all levels of the OSI seven layers) and the corresponding client environment for the customers.

I will give you an example. When you hire e-mail services from Google Apps or any other cloud-hosted application service provider, you get a control panel screen through which you can maintain the mailboxes for your company. All the configurations can be triggered through icons. There will be separate icons through which you can configure your own security controls, specific to your own subscription only. Some examples of the icons are - account-level filtering, user-level filtering, e-mail authentication, SpamAssassin, SSL configuration panel, etc. Every cloud hosting user that maintains a secured business on the Internet is aware of these icons. These are security controls specific to a company (a virtual closed user group) - but this doesn't mean that the cloud hosting provider has installed any dedicated security device for the company. These devices work in shared mode for thousands of companies that have hosted their services on the same cloud. In fact, the cloud hosting provider has implemented additional configurations to provide dedicated services to cloud subscribers. Let us take the example of e-mail authentication. Guess what they would have implemented? - just an LDAP server!! What is there in an LDAP server? - user accounts, group accounts, authorizations, privileges, etc.!! Where are the privileges and authorizations configured? - on network objects (files, folders, databases, mailboxes, etc.)!! Now what have they added on the cloud? They have added a method to ensure that a company's domain account becomes a network object for them. How does this happen? They have created customized web services on e-mail servers (like MS Exchange, qmail, or Sendmail) in such a way that each server can host mailboxes for multiple domains, and there can be a super user who is the owner of the domain and all mailboxes under it.
To provide privileges to the super user, they have integrated the LDAP server with the customized mail server through appropriate web programming such that the LDAP server recognizes the domain as the network object and the super user as its owner. This customizing also results in a combined administration panel for both e-mail server and the LDAP server, to enable the user company to implement their own security controls. Similar settings can be implemented for other user services as well. Given the huge volumes, these security applications (LDAP, Spam filter, IPS, Web Services Firewalls, etc.) are massive and hence a Unified Threat Management (UTM) service provider is needed to work closely with the cloud hosting service provider.
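The "domain as a network object" idea above can be illustrated with a deliberately simplified sketch. Every class and method name here is hypothetical, and a real deployment would use an actual LDAP directory rather than in-memory dictionaries; the point is only the privilege model:

```python
class Directory:
    """Toy model of a directory in which a hosted domain itself is a
    network object, owned by the customer's super user."""

    def __init__(self):
        self.objects = {}     # object (domain) name -> owning super user
        self.mailboxes = {}   # mailbox address -> domain it belongs to

    def register_domain(self, domain, super_user):
        # The domain becomes a directory object owned by the super user.
        self.objects[domain] = super_user

    def add_mailbox(self, mailbox, domain):
        self.mailboxes[mailbox] = domain

    def can_administer(self, user, mailbox):
        """The domain owner transitively administers every mailbox under
        the domain -- the privilege model sketched above. No other user
        of the shared infrastructure gets access."""
        domain = self.mailboxes.get(mailbox)
        return domain is not None and self.objects.get(domain) == user
```

The isolation between tenants on the shared cloud then reduces to an ownership check on the domain object, rather than dedicated security devices per customer.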
Cloud computing hosting can be viewed as external virtualization, which is an extended IT infrastructure for companies that are geographically dispersed. You may like to study how the principles of IT security management, IT governance, and IT service continuity can be fulfilled by keeping some IT services internal and extending other services to multiple cloud service providers. To gauge the principles, you may need help from some global standards and best practices; there are many frameworks that deal with these concepts:
(a) ISO 27001/27002 - Information Security (this is related to IT Risk Management as well, with built-in controls for IT Business Continuity and Disaster Recovery)
(b) ISO 27005, COBIT, RISK IT - IT Risk Management
(c) Val IT - Value proposition to Business by IT (includes IT Service Continuity)
(d) ITIL Versions 2 and 3 - IT Service Continuity is an integral part of overall Service Management Framework
(e) PAS 77 - dedicated standard for IT Service Continuity Management
(f) ISO 24762:2008 - dedicated standard for ICT Disaster Recovery Services
Your topics may comprise these frameworks combined with actual security controls possible on cloud hosting, through UTM service providers or otherwise. The studies may be carried out by modelling and simulating various security attributes on appropriate network modelling tools (OPNET, Cisco Packet Tracer, OMNET++, etc.), or by conducting surveys and interviews of experienced IT professionals that are managing cloud-hosted services for their end users. Please contact us to discuss your interest area in cloud computing security. We will help you to formulate appropriate topics, their descriptions, and your research aims and objectives, supported by the most relevant literature. We have helped many students in completing their research projects on IT security and IT governance in cloud computing. There is no dearth of topics, as this is an emerging field that is actively targeted for academic research studies. However, it should be kept in mind that research studies in this field should yield firm and actionable outcomes, in the form of IT security strategies, IT governance strategies, and architectures and designs for the end users of cloud computing hosting and for the service providers that are still struggling to convince the global regulators that cloud computing security is in no way inferior to traditional self-hosted IT infrastructure security. The standards and global best practices (listed above) can definitely add value, although the implementation plans for cloud hosting end-user companies should evolve from academic research studies.

Please view the research areas of ETCO India and the research topics delivered.

Wednesday, November 23, 2011

Topics for Dissertations and Thesis Research Projects in Procurement Management, Supply Chain Management, Inventory Management, and Distribution Management

Keywords applicable to this article: supply chain components, inventory, logistics, supply chain network design, transportation network design, distribution network design, warehousing, depot management, push and pull supply chain, supply chain efficiency and effectiveness, Porter’s value chain, supply chain performance drivers, demand forecasting, aggregation planning, economies of scale, supply chain risk management, global supply chains, IT management in supply chains, E-supply chains, Lean Six Sigma in supply chain, sustainable supply chain.

Supply chain management is one area that will never have a dearth of research topics for dissertation and thesis projects. This is because the global business framework is changing very rapidly due to the challenges posed by globalization, which directly affects supply chain design and management by an organization. Environmental issues, rising oil prices, increasing carbon footprints, rising tariffs, rising threats in international waters and air cargo, increasing supply chain risks, high competition, rising customer expectations, etc. are significant challenges facing modern supply chain managers, who are already under pressure to reduce lead times in every step of supply chain management. Modern supply chain practices need to be highly proactive, horizontally integrated, information driven, network based, and technology enabled. These challenges are rapidly eliminating the old beliefs and practices, giving way to new ways of managing the components of the supply chain. The core elements of the supply chain – procurement management, production management, inventory management, distribution management, and retail management – can no longer operate as distinct verticals but need to be integrated horizontally with the help of accurate and timely information management and flow, synchronous activities, effective coordination, decision-making power at lower levels, better economies of scale, elimination of wastes, increased reliance on actual demand (rather than demand forecasting), organization-wide cost reduction targets, and excellent service delivery. In this context, I hereby present some of the key areas where students may like to conduct their research studies:

(A)  Functional Integration of Procurement, Production, Inventory, Distribution, and Retail Management: In modern supply chains, organizations place high emphasis on horizontal integration of supply chain components by breaking all the traditional functional barriers that have existed since the concept was born. Modern supply chain agents integrate effectively by sharing timely and accurate information with everyone in a very transparent manner. For example, if the supply chain has multiple inventory points, the procurement manager may have access to daily, or even hourly, updates of the inventory levels at all the points. Functional integration is evident even with suppliers and customers. Systems like automatic reordering by an IT-enabled system at fixed pre-negotiated prices whenever inventory levels dip below the reorder points, continuous flow of consumption information upstream and shipping information downstream between the endpoints, supplier-managed inventory at customer premises, and exact and timely flow of actual demand information reducing the need for demand forecasting are no longer just empirical theories in the dreams of academicians. I suggest that students undertake academic research studies on how supply chain integration is carried out by modern companies by conducting on-field surveys and interviews. The studies can be conducted on a particular company or on the entire supply network of a commodity.
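The automatic-reordering rule mentioned above can be sketched as follows. The quantities are hypothetical, and a production system would pull them from the shared inventory database rather than pass them as arguments:

```python
def reorder_point(daily_demand, lead_time_days, safety_stock):
    """Classic reorder point: expected demand over the supplier lead time
    plus a safety-stock buffer against demand variability."""
    return daily_demand * lead_time_days + safety_stock

def auto_reorder(on_hand, on_order, rop, order_qty):
    """Return the quantity to order now (0 if no order is triggered).
    The inventory position counts stock already on order, so the rule
    does not double-order while a replenishment is in transit."""
    position = on_hand + on_order
    return order_qty if position < rop else 0
```

For instance, with a daily demand of 40 units, a 5-day lead time, and 60 units of safety stock, the reorder point is 260 units; an order fires only when on-hand plus on-order inventory dips below that level.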

(B)  Supply Chain Network Design: The concept of network design is rapidly gaining popularity in supply chain management. In fact, many modern scholars are talking about renaming “Supply Chain Management” to “Supply Network Management”. This is because companies no longer just manage multi-tier suppliers in the form of chains but rather manage a whole network of suppliers for every key purchase. The concept of supply network has evolved as a result of globalization and the rapid growth of the Internet, leading to reduced gaps between suppliers and buyers of the world. The network design concepts are applied in the areas of transportation, distribution, and retailing. The actual design depends upon the supply chain strategy, scope, cost, risks and uncertainties, and demand information. The key design considerations in network design are – nodes and links, direct shipments, milk runs, in-transit mergers, domestic transit routes, international transit routes, last mile transit routes, locations of depots, warehouses, distributor storage, retail outlets, and risks related to each node and link. The key factors that need to be taken into account are – strategic factors, technological factors, macroeconomic factors, political factors, infrastructure factors, competitive factors, socioeconomic factors, localization, response time expectations (of customers), facility costs, and logistics costs. In my view, network design in supply chain management has ample opportunities for conducting academic studies for students and professionals. The studies will be based more on interviews, because the students will need to learn from specialist network designers in supply chains.

(C)  Pull Supply Chain Strategy: It is almost official now that the world is drifting towards the pull supply chain strategy. Business houses are now focusing more on gaining exact demand information rather than depending upon demand forecasts. Companies have already faced significant problems due to high inventory costs and wastage of unconsumed products in light of forecast inaccuracy. However, it may be noted that the pull strategy is not as straightforward as the push strategy. Strategists no longer have the leverage to just depend upon demand models, viewed as magic wands in the past, but are required to proactively collect actual demand information. This change requires effective integration with suppliers and buyers, and large-scale information sharing through sophisticated information systems. Companies need to think much beyond Japanese Kanbans or lean strategies (even these have backfired, really!). Students may like to study what companies are doing, or can do, to shift to the pull strategy as much as possible.

(D)  Supply Chain Efficiency and Effectiveness: Every organization spends significant amounts on supply chain management. Effective financial planning, cost control, timely service, high quality of service, and return on investments in the supply chain are key drivers of efficiency and effectiveness. A number of metrics are taken as inputs to strategic supply chain planning to ensure that optimum efficiency and effectiveness can be achieved. This research area may require on-site quantitative data collection and quantitative analytics using SPSS or other such statistical analysis tools to arrive at the results. Students may have to discover independent and dependent variables and their correlations using descriptive and inferential statistical methods. Another research area in this field may be Lean Six Sigma, which is a mix of Lean supply chain methods and Six Sigma tools. It is primarily targeted at eliminating wastes and improving supply chain efficiency. This is, however, a new research area and hence students may face a shortage of references.

(E)  Supply Chain Integration: This research area may be taken as an extension of functional integration (point A). The student may like to study how companies are integrating with key suppliers and customers to improve the flow of information about demand (upstream) and supply (downstream) and to reduce lead times. Modern concepts like direct delivery (from suppliers to customers), vendor managed inventory (VMI), cross-docking, optimal procurement policy, optimal manufacturing strategy, inventory minimization, input and output control, aggregation planning, process integration, real-time monitoring and control, optimization of operations, supply chain object libraries, enterprise supply chain integration modeling, 3PL and 4PL, quick response (QR), efficient consumer response (ECR), continuous replenishment planning (CRP), and collaborative planning, forecasting, and replenishment (CPFR) are included in the scope of supply chain integration. Students may choose a particular area and conduct on-site interviews of supply chain experts about how these practices are incorporated by organizations in their supply chain integration strategies. The studies may be mostly qualitative.

(F)  Supply Chain Performance Drivers: The key performance drivers of supply chain management are – facility effectiveness, inventory effectiveness, transportation effectiveness, information effectiveness, sourcing effectiveness, pricing effectiveness, delivery effectiveness, quality effectiveness, and service effectiveness. These drivers comprise multiple performance indicators that may be measured quantitatively by collecting data and analysing it in SPSS. The studies in this area may primarily be quantitative with descriptive statistical analysis. In the modern world, sustainable supply chain management to support the triple bottom line (equity, environment, and economy) is also included in the scope of supply chain performance drivers. This is, however, a new research area and hence students may face a shortage of references.

(G) Demand Forecasting: The concept of demand forecasting is diminishing as more and more companies now focus on getting accurate and timely demand information rather than depending upon forecasts. This is achieved by effective integration of information from all the nodes of the supply chain and disseminating it upstream as well as downstream. However, many industries will continue to depend upon the push strategy and demand forecasting. Students may like to study the drawbacks of traditional forecasting methods (like time series forecasting, moving averages, trend analysis, etc.) and the ways of improving forecasting accuracy. Many companies want to incorporate real-time data in their forecasting models and focus on forecasting for shorter periods. This requires much additional knowledge over and above the traditional ways of working upon past demand data. Modern forecasting models may be based on accurate knowledge of customer segments, major factors that influence forecasting accuracy, information integration, the bullwhip effect, scenario planning, simulations, external factors, risks, and causal (Fishbone or Ishikawa) analysis. Most of the studies may be qualitative or triangulated.
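As a minimal illustration of one of the traditional methods named above, the sketch below computes a simple moving-average forecast; the demand figures and window size are illustrative assumptions, not data from any study:

```python
# Simple moving-average forecast: the next period is predicted as the
# mean of the last `window` observed periods. Figures are illustrative.
def moving_average_forecast(demand, window=3):
    """Forecast the next period as the mean of the last `window` periods."""
    if len(demand) < window:
        raise ValueError("need at least `window` observations")
    return sum(demand[-window:]) / window

past_demand = [120, 135, 128, 140, 152, 149]  # hypothetical monthly units
print(moving_average_forecast(past_demand, window=3))  # mean of 140, 152, 149 -> 147.0
```

A shorter window reacts faster to demand shifts but amplifies noise, which is exactly the trade-off that drives the forecast-inaccuracy problems discussed above.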

(H)  Aggregation Planning: Aggregation is carried out by a company to determine the levels of pricing, capacity, production, outsourcing, inventory, etc. during a specified period. Aggregation planning helps in consolidation of the internal and external stock keeping units (SKUs) within the decision and strategic framework for reducing costs, meeting demand, and maximising profits. It may be viewed as the next step after either demand forecasting (push strategy) or demand information accumulation (pull strategy) for estimating the inventory levels, internal capacity levels, outsourced capacity levels, workforce levels, and production levels required in a specified time period. Students may like to conduct qualitative case studies to research modern practices of aggregation planning in various industrial and retail sectors.

(I)   Global Supply Chains: In the modern world, suppliers in a country face direct competition from international suppliers as if the latter were operating within the country. This has happened due to modernization of information management and dissemination, supply routes, payment channels, and electronic contracts, leading to improved reliability and reduced lead times of international suppliers. Students may like to undertake studies on the monitoring and management of global supply chains/networks by professionals working in MNCs.

(J)   E-Supply Chains: E-supply chains are linked with e-businesses that use the Internet as their medium for accepting orders and payments, and then use physical channels to deliver the products. The e-supply chain is an excellent example of the pull strategy and short-term demand forecasting. Information flow across the supply chain is instantaneous because both the endpoints and the intermediate agents work through a single Internet-enabled portal. eBay is viewed as one of the founders of this concept at a global scale, with built-in electronic contract signing and management, electronic payment processing, and electronic delivery processing. Students can find various case studies on e-supply chains, although the empirical theories are still evolving. The research studies would be quite challenging, modern, and unique, but poorly supported by literature as the field is still evolving.

(K)  Supply Chain Risk Management: Supply chain risk management is gaining immense popularity due to the globalization of competitive landscapes and growing threats and uncertainty. Risk management in supply chains is directly linked with supply chain agility and hence needs to be done in a very organized and objective manner, incorporating quantitative models. Supply chain risk management is a novel dissertation/thesis research area based on known and current teething problems in logistics/supply chain management. The root of the problems lies somewhere in the uncertainties in the upstream as well as downstream flows of materials, funds, and information. For example, if there are errors in calculating economic order quantities (EOQ) and reorder levels, the ordering process may not synchronize well with the lead times. On the other hand, the lead times are uncertain due to various delay factors and fluctuations in costs if a transportation mode is changed. Holding inventory is the safest haven for logistics managers, but I am sure the top management of any organisation will never like it. The primary purpose of this subject matter is to keep the lowest possible inventories while ensuring consistent, timely, and accurate supplies to the end users. The challenges are in the following areas:

(a) Lack of integration/synchronization/co-ordination
(b) Lack of appropriate quantitative models
(c) Lack of integrated information availability, even if the quantitative models are in place (i.e., the company has invested in SCM software tools)
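The EOQ and reorder-level synchronization mentioned in the opening paragraph of this topic can be made concrete with the classical Wilson EOQ formula and a basic reorder point; all figures below are illustrative assumptions:

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Classical Wilson EOQ: sqrt(2DS/H), where D is annual demand,
    S the fixed cost per order, and H the annual holding cost per unit."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    """Reorder when inventory falls to expected lead-time demand plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

# Hypothetical item: 12,000 units/year, $50 per order, $2.40/unit/year holding cost
print(round(eoq(12000, 50, 2.4)))              # ~707 units per order
print(reorder_point(40, 7, safety_stock=100))  # 380 units
```

Note how the reorder point silently assumes a known, fixed lead time; uncertain lead times and fluctuating costs are exactly what breaks this synchronization and motivates the risk communication system discussed next.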

The solution lies somewhere in implementing an appropriate supply chain risk communication system. You will appreciate that supply chain risk is also a floating entity, just like materials, funds, and information. If the entire chain is integrated through an extranet portal system, and updates of every consignment code are uploaded periodically by all agents connected with the portal, the software can generate proactive risk alerts for the logistics managers such that they can take operating-level, tactical-level, and even strategic-level mitigation actions. Although such a system is still in its conceptual stage, academic researchers can contribute to its overall conceptualisation and design. It may be integrated as a layer above the traditional SCM software. An agent sensing any variation in delay or cost may log a threat and its probability against a consignment code. The probability and impact levels may be fed to the logistics agents, who can calculate the impact (like a stock-out by a date). The outcome will be a risk value, which will be escalated to an appropriate authority level, and an appropriate mitigation action will be suggested. For example, if there is a temporary unrest in a country, the current consignments can be airlifted and subsequent orders placed with an alternate supplier.

I suggest that you may like to study the sources of supply chain risks in a selected sample of transactions in your field and design a novel SCRC (supply chain risk communication) framework employing ISO 31000, M-o-R, COSO, COBIT v5, or similar Enterprise Risk Management (ERM) frameworks for enterprise-wide estimation and communication of risks. The key risks that you can target in your SCRM framework can be categorized as: disruptions, delays, forecast errors, procurement risks, supplier risks, lead time risks, receivable risks, capacity risks, and inventory risks. You may collect a list of known supply chain threats in your area of interest, categorize them under one of these risk categories, judge the impact on business, judge the vulnerabilities, and arrive at the risk values using the quantitative formulations of the chosen model. Once the risk values are calculated, you may propose mitigation strategies pertaining to redundant suppliers, better supplier relationships (i.e., eliminating procurement hops), alternate routes (i.e., alternate loading/unloading ports and links), adding capacity and inventory, shifting warehouses, changing the distribution model (direct shipments, cyclic shipments, milk run shipments, in-transit merging, adding retail stores, cross-dock distribution, etc.), changing transportation media, etc. You may validate the proposed SCRC framework by interviewing supply chain experts in your country. Hence, the problem statement of your thesis will be related to the known threats and vulnerabilities in supply chain management in the selected transactions (chosen by you), and the solution will be a novel supply chain risk communication framework to manage the risks resulting from these threats and vulnerabilities. It will be quantitative research with descriptive and inferential statistical analysis.
The outcome of this model will be on-the-fly alerts on risk levels and their mitigation as soon as a risk is logged (you will need to define mitigation actions against various risk levels, and the suggested authorities to make decisions). You may like to validate your model by surveying experts in your network. A short, to-the-point structured questionnaire may be used such that you can present validity and reliability analysis using SPSS.
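The risk-logging and escalation step described above might be sketched as follows; the risk formula, thresholds, authority levels, and example actions are all illustrative assumptions rather than part of any published framework:

```python
# Sketch of the risk-logging step of a hypothetical SCRC framework.
# Thresholds, authority levels, and scales are illustrative assumptions.
ESCALATION = [            # (minimum risk value, authority level)
    (15, "strategic"),    # e.g. board decision: switch to an alternate supplier
    (8,  "tactical"),     # e.g. re-route or airlift the consignment
    (0,  "operational"),  # e.g. adjust local schedules
]

def log_risk(probability, impact):
    """Risk value = probability (0-1) x impact (assumed 1-20 scale);
    returns the value and the authority level it escalates to."""
    risk_value = probability * impact
    for threshold, authority in ESCALATION:
        if risk_value >= threshold:
            return risk_value, authority

print(log_risk(0.75, 20))  # (15.0, 'strategic')
print(log_risk(0.5, 20))   # (10.0, 'tactical')
print(log_risk(0.25, 16))  # (4.0, 'operational')
```

In a real SCRC layer the thresholds would come from the chosen ERM framework and the risk appetite agreed with top management, not from constants in code.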

(L)  Information Technology in Supply Chain Management: A number of information technology platforms are popular in supply chain management. Some of the key IT tools in supply chain management are IBM Supply Chain Simulator, Rhythm (by i2 Technologies), Advanced Planner and Optimizer by SAP, Manugistics, MatrixOne, Oracle Supply Chain Management, etc. These tools possess various functionalities – like enterprise planning, demand planning, production scheduling, distribution planning, procurement and replenishment planning, facilities location planning, manufacturing planning, logistics strategy formulation, stocking levels planning, lead times planning, process costing, customer service planning, procurement, supply and transportation scheduling, global logistics management, constraint-based master planning, demand management, material planning, network design and optimization, supply chain analytics, transportation management, vendor managed inventory (VMI) planning, continuous replenishment planning (CRP), and many more. Students may like to study the various IT systems and software tools for carrying out such activities in supply chain management. The studies may be primarily qualitative or triangulated.

Please view the research areas of ETCO India at: and the research topics delivered at

Saturday, October 9, 2010

Academic Research on Information Security Risk Management and Business Impact Analysis

Information assets are very critical for the success of modern IT-enabled businesses. In the modern world, information assets are exposed to threats that are emerging almost daily. The threats to information assets result in "Risks" with potential impact on businesses. The potential damage against an impact classifies the "Criticality" of the risk. The key to information security of an organization is to know the assets, know the threats to the assets, assess the probability and impacts to business, accurately measure the associated risks, and finally establish appropriate mitigation strategies to reduce, avoid or transfer the risks. I recommend that Information Risk Management should be an integral part of an organization's corporate governance such that adequate executive attention to the risks can be invited and mitigation strategies can be formulated. In many countries, it is a legal requirement if the organization is managing critical public systems or data.
To manage information risks, it is mandatory to know ALL the critical information assets of the organization. Every system that creates, processes, transfers or stores information is an information asset – like file/folders, databases, hard copy storage areas, desktops, laptops, shared network resources, employees' drawers/lockers, or the employees' own memory (tacit knowledge). The primary requirement of risk management is to have an "Information Asset Register", which is a secured database that needs to be updated regularly as and when assets are added, modified or deleted.
Every organization can have its own definitions of the "Confidentiality", "Integrity" and "Availability" parameters related to an information asset. These parameters should translate into metrics that should be assigned to EVERY critical information asset identified in the Information Asset Register. The outcome is known as an "Asset Value", tagged against every asset entered in the Asset Register.
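As a sketch of how the three parameter ratings might roll up into an Asset Value, one simple convention (an assumption here, since every organization defines its own metrics) is to let the highest of the three ratings drive the value:

```python
def asset_value(confidentiality, integrity, availability):
    """Assumed convention: the most sensitive dimension (highest rating
    on a 1-5 scale) drives the overall asset value."""
    return max(confidentiality, integrity, availability)

# Hypothetical asset: a customer database rated C=5, I=3, A=4
print(asset_value(5, 3, 4))  # 5
```

Other schemes (e.g. summing the three ratings) are equally valid; what matters is that the same scheme is applied consistently across the Asset Register.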
The next important step is to assess the "Threat Value" by virtue of an in-depth analysis of the possible causes, the impact value (a function of multiple impacts, like financial or reputational impact), and the probability of an impact. Every organization can have its own parameters for calculation of the Threat Value because it largely depends upon the exposure factors (like legal, competition, environmental, etc.) that the organization is facing or can potentially face in the future.
The subsequent step is to assess the "Loss Event Value", which is a function of the possible events of asset compromise that the organization can face. Again, every organization can have its own loss event descriptions and assessment methodology, which are normally categorised under the known vulnerabilities in the organization.
The final step is to arrive at the "Risk Value", which is a function of the Asset Value, the Threat Value and the Loss Event Value. The calculation of the Risk Value can be carried out differently in different organizations depending upon how many levels of escalation are feasible within the organization. Information Assets with high Risk Values have high "Vulnerabilities" and hence appropriate controls need to be applied urgently.
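To make this function concrete, here is a minimal sketch that treats the Risk Value as a product of the three values on assumed 1-5 scales; as noted above, each organization will define its own calculation:

```python
def risk_value(asset_value, threat_value, loss_event_value):
    """Assumed multiplicative model on 1-5 scales (maximum 125).
    Each organization defines its own formula and scales."""
    return asset_value * threat_value * loss_event_value

# Hypothetical: high-value asset (5), serious threat (4), moderate loss event (3)
print(risk_value(5, 4, 3))  # 60 out of a possible 125
```

The resulting range (1 to 125 here) is what gets divided into escalation bands, which is why the number of feasible escalation levels shapes the choice of scales.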
Business Impact Analysis is the next step after completion of the Risk Assessment. Risk Assessment process will ensure that all the Information Assets of the organization are identified and the corresponding "Risk Values" are assessed.
The scale of the Risk Values can be defined depending upon the number of escalations feasible within an organization. A large organization may like to keep a larger scale of Risk Values leading to more levels of escalation such that minor risks are not unnecessarily escalated to senior levels. However, a small organization may like to implement a smaller scale of Risk Values such that the visibility of risks to the senior/top management is better.
At every level of Risk, a mitigation strategy is mandatory. The mitigation strategy may include extra investments or extra precautions depending upon the potential Business Impact of the risk. Some organizations may like to accept risks up to a certain level because the cost to mitigate the risk is higher than the business impact. For example, an organization may like to accept risks causing a financial impact of up to $500,000 because the cost of risk mitigation may be higher than this value. Such decisions are possible after thorough "Business Impact Analysis" in various round table discussions at the top management/board level. Please be aware that business impacts are different from the asset impacts that have been analysed during the risk assessment. Business impact analytics are applied to the entire business and not only to the information assets. These decisions are critical to ensure that an accurate investment plan can be approved such that the organization does not over-invest in low-criticality areas or under-invest in high-criticality areas.
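The acceptance decision described above can be sketched as a simple comparison; the figures and the $500,000 risk appetite mirror the example, while the decision rule itself is an illustrative assumption:

```python
def accept_risk(business_impact, mitigation_cost, risk_appetite=500_000):
    """Accept a risk (rather than mitigate it) when mitigation costs more
    than the impact and the impact is within the stated risk appetite.
    The decision rule is an illustrative assumption."""
    return business_impact <= risk_appetite and mitigation_cost > business_impact

print(accept_risk(400_000, 650_000))    # True: mitigation costs more than the impact
print(accept_risk(2_000_000, 650_000))  # False: impact exceeds the risk appetite
```

In practice the appetite figure comes out of the board-level Business Impact Analysis discussions, not from a default parameter.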
The Business Impact Analysis should result in a list of Mitigation Actions that need to be taken. Whenever an action is completed, the Risk Value can be "Normalized" to a lower value such that the impact is within acceptable limits. Examples of mitigation actions are: addition of CCTV surveillance, better verification of visitors, visitors allowed up to visitor rooms only where CCTV cameras and microphones are installed, thorough analysis of surveillance data by security experts, offsite data storage, transport of backup tapes allowed only in secured metallic boxes via bonded couriers, a backup system ensuring data encryption before writing on tapes, addition of clustering, fail-over, etc. to single-server installations, and so on.
Although such mitigation actions can always be carried out to reduce the Risk Values, a sound approach to keeping Risk Values in control is to have a robust Information Security Management System (ISMS) within the organization, supported by a Disaster Recovery Strategy, Business Continuity Planning, and Service Support & Service Delivery processes.
Although a number of academic research studies have been conducted in these areas, they are largely inadequate because these areas have evolved and grown many times faster than the pace of research by academicians and students. I suggest that students should undertake new topics for dissertations and theses in these areas, given that a lot remains unaddressed by the academic community in the fields of Information Security Risk Management and Business Impact Analysis and Management.

Wednesday, September 29, 2010

IT Security, IT Services and IT Governance Frameworks

Suggested Topics for Dissertations and Thesis Research Projects in IT Security, IT Services and IT Governance Frameworks


The fields of IT Security, IT Governance and IT Services Management are excellent grounds for academic researchers to undertake their dissertation and thesis research projects. The research can result in very practical outcomes given that the standards, frameworks and best practices pertaining to these fields are widely implemented in organisations across the world.
Keywords: dissertation, research, topics, it security, it services, it governance, nist, iso 27005, iso 27002, iso 27001, cobit, itil, it risk management, information security, risk it, val it, computer security, incident management, problem management, change management, business continuity, disaster recovery, isms
This is the third article in the series of recommendations pertaining to dissertation and thesis topics from ETCO India. In the previous articles I have recommended various subject areas pertaining to the latest challenges in the fields of Wireless Communications, IT Systems and Global Computing. The dissertation/thesis projects in the fields of IT Security, IT Services and IT Governance will essentially comprise studies on world-class standards, frameworks and best practices that are widely accepted and implemented in organisations. Students may like to conduct case studies in organisations where these standards, frameworks and best practices are implemented, or else conduct interviews or surveys among the thousands of IT security professionals across the world who are connected via community groups on social networking websites (like LinkedIn, Plaxo, Google Groups, etc.). The culture of sharing knowledge in the world of IT security is excellent because security controls, threat management and best practices can be established effectively only by practicing organized knowledge sharing. The IT security, services and governance consulting companies support academic research wholeheartedly to prepare young minds for future challenges such that the acute shortage of human capital in these fields can be addressed. In this article, I recommend the following standards and frameworks, in which hundreds of topics pertaining to dissertations and thesis research projects can be developed.
 (a) NIST (US Department of Commerce) Recommendations: As per NIST recommendations, all the critical IT systems should be categorized in the first place such that the risks to these systems can be identified, assessed and recorded. Thereafter, appropriate mitigation actions can be taken to reduce them to acceptable levels by reducing the vulnerabilities (applying controls), by avoiding the risks (disallowing activities that can cause risks), or by transferring the risks to third parties (like outsourcing the controls to specialist security agencies). This entire process has been termed IT Risk Management by NIST and is now regarded as the baseline for the industry. It requires management commitment and assignment of security roles to strategic business process owners in the organization. NIST recommends that the key roles contributing to IRM should be Senior Management, the Chief Information Officer, System/Information Owners, Business Managers, Functional Managers, IT Security Officers, Security Awareness Trainers, and Internal Auditors. The risk assessment recommended by NIST is a nine-step structured analytical procedure that should be carried out by the key roles such that the outcomes can be collated to form an organization-wide risk registry.
(b) ISO 27005 Standard:  ISO 27005:2008 is the formal replacement of ISO 13335-3 & ISO 13335-4:2000, and essentially recommends a fully metrics-based evaluation of all the steps of risk assessment described in ISO 13335-3 using quantitative techniques. This standard considers Risk Management, Configuration Management and Change Management as parts of an integrated framework to deliver IT security in an organization. The risk management framework recommended by this standard can be viewed as a model comprising "concentric spheres", with the information assets placed at the core of the model, vulnerabilities prevailing in the sphere above the core, controls applied over the vulnerability sphere, and threats prevailing at the periphery of the model. This model was originally part of ISO 13335-3 and represents an environment of continuously changing threats, which in turn change the risk baselines (residual acceptable risk levels) of organizations. Hence, periodic assessment of the effectiveness of controls is required such that the vulnerabilities are not exploited by emerging external or internal threats to affect the information assets.
(c) ISO 27002 Standard: The ISO 27002:2005 standard was formerly known as ISO 17799:2005, the code of practice for information security used as the supplement document of the ISO 27001:2005 standard, which is the largest framework of standards describing information security implementation in an organization. The ISO 27002 standard recommends the practices documented in ISO 13335-3, which essentially is a wider framework of information security because it covers the impacts in terms of confidentiality, integrity, availability, accountability, authenticity and reliability. Unlike the "system characterization" recommended as the starting point by NIST, this standard recommends "asset characterization" as the starting point, which includes tangibles as well as intangibles. The asset characterization is carried out by assuming that anything that is critical for the business to produce its products & services and retain customers as well as market share is treated as a critical asset of the organization. It may be systems (IT systems, power systems, admin systems, etc.), people, documents, records, databases, applications, intellectual properties, etc., thus forming a much wider coverage of subjects on which the risk analysis needs to be carried out. The threat & vulnerability analysis is carried out employing steps similar to the NIST recommendations, but the impact analysis is based on multiple business impacts categorized by the business stakeholders – like financial loss, business loss, customer loss, market share loss, key people loss, premises loss, intellectual property breaches, regulatory breaches, productivity loss, inventory loss, etc. Protection against such losses is the direct interest of business stakeholders and hence the topmost priority of the risk management teams. The final stages of risk analysis, control analysis, and control recommendations are similar to those of the NIST recommendations.
This framework also recommends periodic control effectiveness testing which is recommended by NIST in their special publication 800-115 released in 2008.
(d) The COBIT Framework: The COBIT (Control Objectives for Information and Related Technology) framework is developed by the IT Governance Institute, a community of expert developers and reviewers from the IT governance field who have contributed to the framework to arrive at the best practices published in its current form. The IT Governance Institute comprises a board of trustees, an IT governance committee, a COBIT steering committee, an advisory panel, and affiliates & sponsors. The framework is a wonderful effort at putting together all the best practices of IT governance & risk management, which organizations can adopt to support their business governance & risk management frameworks effectively. The COBIT framework helps in effective alignment of IT systems & processes with business requirements such that the business risks due to IT enablement can be effectively mitigated.
(e) CRAMM Framework: CRAMM is the risk management methodology developed by the Central Computer and Telecommunications Agency (CCTA), which is based on qualitative methods of risk analysis. In this mechanism, the steps called “asset identification & valuation”, “identification & assessment of threat & vulnerability”, “identification of security measures”, “identification of risks” and “identification & assessment of risk mitigation” are carried out using structured questionnaires defined by the CRAMM framework. Each question has either a “yes” or “no” answer, and the scores are collated by counting the numbers of “yes” and “no” responses, which is done automatically by the CRAMM system. If the target respondents of the CRAMM questionnaire are selected very carefully (like asset owners, IT administrators, application engineers, database administrators, etc.), then CRAMM can result in accurate identification & mitigation strategies for IT risks.
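The yes/no collation step can be sketched as follows; the step names mirror those above, but the questions and responses are illustrative, and the real CRAMM tool supplies its own structured questionnaires and automates this scoring:

```python
# Sketch of CRAMM-style scoring: count "yes" responses per assessment step.
# Responses are illustrative; real CRAMM questionnaires are far longer.
responses = {
    "asset identification & valuation":    ["yes", "yes", "no"],   # 2 "yes"
    "threat & vulnerability assessment":   ["yes", "no", "no", "yes"],  # 2 "yes"
    "identification of security measures": ["yes", "yes", "yes"],  # 3 "yes"
}

scores = {step: answers.count("yes") for step, answers in responses.items()}
print(scores)
```

Because the scoring is a pure count, the quality of the result rests entirely on choosing the right respondents, which is exactly the caveat noted above.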
(f) OCTAVE Framework: OCTAVE is the abbreviation for “Operationally Critical Threat, Asset and Vulnerability Evaluation”, a model developed by Carnegie Mellon University. This framework takes into account operational risk, security practices, and technology, and leverages the existing knowledge of vulnerabilities within an organization. The assessment is carried out in three phases – “development of asset-based threat profiles”, “identification of infrastructure vulnerabilities” and “building security strategies & plans”. The first phase requires an organizational view whereas the second phase requires a technological view. The OCTAVE assessment is self-directed, without the need for external experts to guide the organization. Just like CRAMM it is a self-guided process, but it is carried out by a few experts in the company who have extensive knowledge of the company's IT systems, whereas CRAMM is carried out by all asset owners of the company. One good aspect of OCTAVE is that it captures the knowledge of threats to the business and internal weaknesses from people at all levels, and then uses that knowledge to develop the asset-based threat profiles. This ensures that the risk assessment is very close to the people's perspective of the threat exposures of the business and not based on some kind of threat database purchased from external consultants.
(g) FRAP Framework: The Facilitated Risk Analysis Process (FRAP) is a framework that essentially takes into account the prioritized threats and asset vulnerabilities that can potentially cause maximum damage to the business. This again is a qualitative approach and is popularly known as the "four hour risk assessment". FRAP is not accepted by many organizations because their threat perceptions do not allow a scaled-down list of assets, threats and vulnerabilities to be addressed. However, this is an effective framework given that the 80-20 rule applies in risk management as well – i.e., 20% of threats cause 80% of the damage.
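The 80-20 prioritization that makes FRAP workable can be sketched as a simple Pareto cut; the threats and their damage shares are purely illustrative assumptions:

```python
# Illustrative Pareto cut: keep the smallest set of threats that together
# account for at least 80% of the expected damage. Figures are hypothetical.
threats = {"port strike": 50, "supplier default": 30, "forecast error": 10,
           "theft": 6, "data entry error": 4}  # damage shares in percent

ranked = sorted(threats.items(), key=lambda kv: kv[1], reverse=True)
selected, cumulative = [], 0
for name, share in ranked:
    if cumulative >= 80:
        break
    selected.append(name)
    cumulative += share

print(selected)  # ['port strike', 'supplier default'] already covers 80%
```

This is the essence of the "four hour" claim: by addressing only the short head of the distribution, the assessment stays tractable at the cost of ignoring the long tail.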
(h) ITIL version 2 and version 3 Frameworks: ITIL versions 2 and 3 are publications by the Office of Government Commerce (OGC), UK. They are end-to-end IT service management frameworks that can effectively align the IT services of an organization to business requirements at the operations level. ITIL version 2 is very popular due to its wide implementation base across many countries of the world. It has two major disciplines – IT Service Support and IT Service Delivery. The IT Service Support discipline comprises the Service Desk function of an organization and five management functions – Incident Management, Problem Management, Change Management, Release Management and Configuration Management. These management functions are also included in the ISO 27001 and ISO 20000 standards as well as in the COBIT framework. The IT Service Delivery discipline comprises five management functions as well – Service Level Management, Capacity Management, Availability Management, IT Financial Management and IT Business Continuity Management.
ITIL version 3 is a much broader framework than ITIL version 2. It comprises five disciplines as against two in version 2: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. Many new management functions are included in ITIL version 3 in addition to the ten functions recommended by ITIL version 2. This is a new framework and hence its global roll-out is evolving gradually. Students can find vast research opportunities in both these areas in the form of phenomenography or case studies.
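The two-discipline structure of ITIL version 2 and the five-discipline lifecycle of version 3 described above can be captured as plain data for quick reference; the dictionary layout itself is just a sketch:

```python
# The ITIL v2/v3 structure described above, captured as plain data.
# Discipline and function names follow the text; the layout is illustrative.

itil_v2 = {
    "IT Service Support": [
        "Incident management", "Problem management", "Change management",
        "Release management", "Configuration management",
    ],
    "IT Service Delivery": [
        "Service Level management", "Capacity management",
        "Availability management", "IT Financials management",
        "IT Business Continuity management",
    ],
}

itil_v3_disciplines = [
    "Service Strategy", "Service Design", "Service Transition",
    "Service Operation", "Continual Service Improvement",
]

# v2 recommends ten management functions across its two disciplines;
# v3 reorganizes (and extends) them across five lifecycle disciplines.
assert sum(len(fns) for fns in itil_v2.values()) == 10
assert len(itil_v3_disciplines) == 5
```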
(i) Val IT: This is the latest framework developed by the IT Governance Institute and can be seamlessly integrated with the COBIT framework. It can be implemented to tangibly demonstrate the value of IT investments to the business. This framework has not yet been explored by academic researchers and hence offers an entirely new world of opportunities.
(j) ISO 27001: This is the mother of all standards in Information Security Management Systems (ISMS). No standard offers coverage as wide as ISO 27001 in the field of IT security. The purpose of ISO 27001:2005 is to guide an organization on the level of ISMS implementation feasible as per its business needs. It guides the organization in implementing a structured Information Security Management System through an approach of Risk Assessment and Business Impact Analysis that incorporates world-class best practices in the management of the organization's existing systems. The framework includes:
·      Adequately documented and implemented Security Policies and Procedures.
·      Asset Master comprising all critical Information Assets.
·      Risk Assessment and Business Impact Analysis Worksheets.
·      Risk Treatment Plans and Reports.
·      ISMS Management and Operations Group with detailed roles.
·      ISMS Operating Manual with Statement of Applicability.
·      ISMS Operating Procedures, activity log-sheets, and reports.
·      ISMS Security Procedures pertaining to every operating area.
·      Access Control Policies and Procedures for all Information Processing and Storage Facilities.
·      Incident, Problem, Change, Release, Configuration, Capacity, and Availability Policies and Procedures.
·      Detailed implementation of the 133 normative controls defined in Annex A of BS ISO/IEC 27001:2005.
·      Internal and External Audit Procedures, audit sheets, and corrective/preventive actions.
·      Information Classification, Transit, Storage, and Destruction Policies and Procedures.
·      Disaster Recovery Plan and Procedures.
·      Business Continuity Plan and Procedures.
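A Risk Assessment worksheet of the kind listed above typically scores each asset-threat pair and flags the high scores for a Risk Treatment Plan. The sketch below is a minimal, assumed scoring scheme – the 1-5 scales, the multiplicative score, the treatment threshold, and the sample entries are all illustrative choices, not values prescribed by ISO 27001:

```python
# A minimal sketch of an ISO 27001-style risk-assessment worksheet.
# The 1-5 scales, threshold, and sample rows are assumptions for illustration.

def risk_score(asset_value, likelihood, impact):
    """Simple multiplicative risk score on 1-5 scales (maximum 125)."""
    return asset_value * likelihood * impact

worksheet = [
    # (asset, threat, asset_value, likelihood, impact)
    ("Payroll server", "Unpatched OS exploited", 5, 3, 4),
    ("Office Wi-Fi", "Eavesdropping", 2, 2, 2),
    ("Backup tapes", "Loss in transit", 4, 2, 5),
]

THRESHOLD = 30  # scores at or above this level go into a Risk Treatment Plan

for asset, threat, value, likelihood, impact in worksheet:
    score = risk_score(value, likelihood, impact)
    action = "treat" if score >= THRESHOLD else "accept"
    print(f"{asset}: {threat} -> score {score}, {action}")
```

Organizations vary both the scales and the threshold; what ISO 27001 requires is that the method be defined, applied consistently, and produce comparable, repeatable results.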