Random Thoughts..
Saturday, January 18, 2003
E-mail retention trends and challenges
Robert Mahowald and Mark Levitt of International Data Corporation (IDC)
By Mark Levitt and Robert Mahowald
Information overload fueled by rising volumes of e-mail and other electronic content is an unintended consequence of the Internet age. The real and perceived risks of uncontrolled information flows are driving otherwise rational organizations and individuals to an all-or-nothing approach. They either save everything indefinitely in case it might be needed in the future, or they quickly delete everything that is not absolutely critical to minimize storage and avoid having internal information that could be used in lawsuits.
As in most parts of life, the best approach is often the middle path, which would suggest intelligent, selective retention for an appropriate time with the help of archiving technology to automate the process and minimize reliance on human compliance. That will be the key for organizations to successfully retain not only e-mail but also other types of content for personal, corporate and legal objectives.
In dealing with the rising volume of e-mails and other electronic content that flows into our inboxes and across our screens every day, we are often left on our own to decide what to save and what to delete. How and when to save content is also often decided without formal tools or processes. The result is individual judgment that, when multiplied across the tens, hundreds or thousands of workers in an organization, creates a hodgepodge of approaches whose inconsistency can pose costly problems for individuals and organizations.
De facto registry
Unlike in the early PC age, when individual PC users generated and stored electronic information locally, in the Internet age of global networked computing, content often relates to, and is of interest to, geographically dispersed individuals and organizations. Whether taken from a public Web site, e-mailed over the Internet, posted on a corporate portal or generated by a transactional system, electronic content is increasingly of interest to circles of individuals whose identities and interests may remain unknown until some point in the future.
The rise of e-mail usage has been the single most important factor driving the exchange of electronic content. E-mail communications between people within and across organizations around the globe have replaced many phone and face-to-face meetings. In addition, e-mail with its easy-to-use file attachment capabilities serves as an electronic content delivery service.
The e-mail inbox, as well as private and shared folders, has become the primary de facto registry for recording electronic conversations, agreements, customer interactions and other business-related activities. The need to be able to go back and locate business content contained in e-mails, and possibly nowhere else, explains why personal and corporate reference are the two most common reasons given by respondents for e-mail retention policies and practices. E-mails are also kept for other business purposes such as legal compliance, disaster recovery and industry practice.
With personal reference as the leading motivation for e-mail retention, it should not be surprising that the process of e-mail retention is often left in the hands of individuals who create a jumble of practices even within the same organization. In IDC's E-Mail Retention Survey, nearly half of respondents indicated that retention is handled in a decentralized manner at their organizations, either without a policy or with only an informal policy.
No pain, no policy
The lack of formal policies for centralized archives means that there is little reliability and predictability relating to what content is retained, how long content is retained and how content can be accessed. In addition, the overwhelming reliance on manual processes makes it difficult for organizations to ensure that content will be retained and available for organizational needs such as corporate reference and legal compliance. Even where there is an informal policy or practice, the lack of automated processes means that individual users must be motivated and able to identify and archive all of the relevant content for the appropriate period of time.
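For a sense of what such automation might look like in practice, here is a minimal sketch in Python of a rule-based retention pass. The message categories, retention periods and classification keywords are invented for illustration and are not drawn from any particular archiving product.

    from datetime import datetime, timedelta

    # Hypothetical retention periods by message category (illustrative values only).
    RETENTION_PERIODS = {
        "legal": timedelta(days=365 * 7),     # compliance-related material
        "customer": timedelta(days=365 * 3),  # customer correspondence
        "general": timedelta(days=365),       # everything else
    }

    def classify(message):
        """Crude stand-in for a real classifier: route by subject keywords."""
        subject = message["subject"].lower()
        if "contract" in subject or "legal" in subject:
            return "legal"
        if "order" in subject or "support" in subject:
            return "customer"
        return "general"

    def apply_retention(messages, now=None):
        """Split messages into those still within and those past their retention period."""
        now = now or datetime.utcnow()
        retained, expired = [], []
        for msg in messages:
            period = RETENTION_PERIODS[classify(msg)]
            (expired if now - msg["received"] > period else retained).append(msg)
        return retained, expired

    if __name__ == "__main__":
        inbox = [
            {"subject": "Contract renewal terms", "received": datetime(2001, 1, 5)},
            {"subject": "Lunch on Friday?", "received": datetime(2001, 12, 1)},
        ]
        keep, purge = apply_retention(inbox, now=datetime(2003, 1, 18))
        print(len(keep), "retained;", len(purge), "past retention")

Run against a real mail store, a policy engine along these lines would archive or purge messages without any action by individual users, which is the gap the survey data points to.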
Despite the growing reliance on electronic content and the obvious benefits of having easy access to such content, many organizations do not expect much change in e-mail retention policies or practices. The situation presents challenges for e-mail archiving solution providers such as KVS, Legato, Scopeware, StorageTek, Tumbleweed and Zantaz. The potential market demand for automated, centralized e-mail retention solutions remains largely untapped, but until organizations implement more formal retention policies, it will be hard to convince them to deploy centralized, automated retention solutions.
The lack of a pressing need or a feeling of pain within business units for better e-mail retention means that many IT departments will not be actively looking to deploy an enterprise e-mail retention solution. As long as organizations believe that current e-mail retention levels and approaches are meeting current corporate needs, e-mail archiving systems will risk being perceived as a solution in need of a problem. The answer for archiving solution vendors is to show organizations that current policies and practices fall far short of what retention objectives should be, in light of changing corporate and legal requirements and the availability of cost-efficient e-mail archiving solutions that can operate automatically and invisibly.
.....................................................................................................................................................................................
Robert Mahowald is senior analyst, Collaborative Computing, with IDC (idc.com), e-mail rmahowald@idc.com, and Mark Levitt is research director, Collaborative Computing, e-mail mlevitt@idc.com.
Building the Bases of Knowledge
By Jennifer O'Herron, CallCenter
Jan 6, 2003 (10:12 AM)
URL: http://www.callcentermagazine.com/article/CCM20021223S0005
Word to the wise: Knowledge management software may offer more than you think it does, especially when companies are trying to do more with less.
As we become more of a Web culture, customers are increasingly accustomed to finding information on their own, provided they have the tools to do so. This is where knowledge management comes in.
An on-line knowledge base lets you provide customers and agents with fast and easy access to information about your company, products and services. A knowledge base can reduce the number of repetitive calls to agents, allowing them to concentrate on complex issues. And when complex issues do arise, agents have the knowledge on hand to quickly and accurately solve them.
The case studies below involve companies that have made a significant investment in knowledge management software. Each company pursued specific goals and weighed its choices carefully along the way.
For example, outsourcer Center Partners ran scientifically validated trial groups to measure the benefits of knowledge management software. And ongoing help from integrators and consultants aided Cingular Wireless' implementation.
By gathering the necessary support from executives and employees, these companies were able to gain maximum value from the technology.
A (C)ingular Achievement
When your business involves 22 nationwide call centers employing more than 15,000 people, things tend to get complicated. This is something that Atlanta, GA-based Cingular Wireless knows all too well.
Back in December 2001, we spoke with Cingular executives about the challenges of staffing their recently consolidated call centers. We learned about their strategies for staffing up quickly and efficiently while maintaining a high level of quality. A little more than one year later we returned to find out about their latest challenge.
"Our business - as complex as it is - keeps getting more and more complex," says Steve Mullins, vice president - customer experience for Cingular Wireless.
Cingular's agents needed a user-friendly method for accessing answers to customers' frequently asked questions (FAQs) and troubleshooting procedures.
To that end, Cingular looked to similarly complex businesses in the computer industry, such as Dell and Microsoft. Mullins and colleague Monica Browning, Cingular's director - knowledge management, met with several knowledge management software vendors. They learned how the vendors' tools operate and they visited end users.
"We thought about how [the knowledge management software] would integrate with what we envisioned the future desktop to look like," says Mullins. "This system would be the foundation for what we use throughout all of our departments."
The company chose ServiceWare's (Edison, NJ) eService Suite. ServiceWare's decision integrity department helped Cingular put together a basis for proving the software's return on investment (ROI).
"Before we began to roll out the software, we made sure that we had support from the CIO all the way down," says Mullins. "We acquainted executives with the power of the tool. And we carried out a campaign to let employees know what to expect."
Cingular began implementation by rolling out eService Suite to its tech support department across three call centers. Cingular enlisted the help of consultancies Cap Gemini Ernst & Young (New York, NY) and Innovative Management Solutions (Moorestown, NJ).
"Initially populating the knowledge base was a combined effort between internal employees who are familiar with wireless features and services and an external authoring group from Innovative Management Solutions," says Browning.
Agents access the knowledge base on-line by entering a unique ID name and password. Depending on their user profiles, they view only information that's relevant to them.
To search the knowledge base, agents use natural language to state the particular issue or problem. The software matches their search against a list of authored issues. Agents then select the appropriate issue; the software presents potential resolutions.
"The software uses a complex algorithm to decide the order to list the issues based partly on the exact text and phrase matching," says Browning. "[The software] can also match synonyms and give extra weight to particular things. The more you give it, the easier it is for the system to provide the solution closest to your issue."
Agents can provide feedback by using the software's contribute button located on their toolbars. For example, if they follow a certain path to a correct solution and notice that it didn't mention an important step, agents click the contribute button. The software automatically records their steps and displays a screen with a field for them to enter their feedback. Cingular's knowledge management team can access agents' input and make the appropriate changes.
The knowledge management team consists of about 25 people who maintain the knowledge base full time. Members of the team are located in Cingular's Atlanta headquarters.
The team works with members of Cingular's different departments. For example, if there's a new product launch, the marketing department uses templates to provide the product's details to the knowledge management team. The team then collaborates with designated subject matter experts to put the appropriate information into a user-friendly format on the knowledge base.
To input all the data into the knowledge base - a major challenge - the company divided the process into phases. Browning estimates that it was about four months before the knowledge base was ready for the first group of users.
"We have about 80% to 90% of all the tech support info ready now and a good 70% to 80% of the general info," says Mullins. Next on Cingular's list is to input all of its rate plans, which is no easy task considering the large number of plans.
Cingular also expects to have the knowledge base ready for customers' use before the end of 2003. Customers will be able to access instructions for using wireless services and features, handsets and other devices that Cingular carries, plus troubleshooting tips. The knowledge base will be available to customers on-line and in Cingular's retail stores.
For the longer term, Cingular aims to provide the knowledge base's basic information through the company's IVR system. Using speech recognition, customers will be able to use natural language to search for basic queries.
Centering Around Knowledge
A lack of knowledge wasn't a problem for Fort Collins, Colorado-based outsourcer Center Partners. Instead, agents working on one client's account had a hard time managing all of the client's disparate info, which included Web pages from the client and the client's vendor suppliers.
Because Center Partners had no control over the client's information, it had to work with what it had. The main goal was to organize the information so that agents could access it faster and more easily. The outsourcer was also striving to improve the quality of service that agents provide to the client's customers.
"Agents were frustrated because the knowledge wasn't organized in a way that would allow them to rapidly and efficiently get to it," says David Geiger, Center Partners' chief information officer. "Because of the cumbersome navigation, one of the biggest problems was that agents weren't using the information. [They] were instead trying to memorize a large number of details, which change daily."
After researching their options, Center Partners decided on TheBrain Technologies' (Santa Monica, CA) BrainEKP. As an overlay client system, the knowledge base allowed the company to better map client info without changing it.
However, Center Partners wasn't ready to accept knowledge management at face value. The outfit put together a trial group of 30 agents to test the software - half using BrainEKP and the other half not. Each group contained a scientifically validated sample of people with similar characteristics and skills. The results were striking: Within the group of agents using BrainEKP, quality scores improved 3.6%.
The outsourcer also had an eye on agents' average handle times (AHT), which executives feared might increase once agents began to use the new tool. However, they were pleasantly surprised when agents using BrainEKP saw a decrease in AHT of 43 seconds. Geiger says the test results have a statistical confidence level of between 95% and 97%.
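For readers curious how such a confidence figure is reached, the sketch below shows the sort of two-sample comparison that produces one. It uses Python with SciPy and invented handle-time samples; the numbers are not Center Partners' data, and reading 1 minus the p-value as a "confidence level" mirrors the article's informal usage rather than strict statistical practice.

    # Illustrative two-sample comparison of average handle time (AHT), in seconds.
    # All figures below are invented; they are not Center Partners' results.
    from scipy import stats

    aht_with_brainekp = [402, 388, 415, 396, 380, 410, 399, 385, 405, 392]
    aht_without = [445, 430, 452, 438, 460, 441, 449, 435, 455, 447]

    t_stat, p_value = stats.ttest_ind(aht_with_brainekp, aht_without)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    print(f"Roughly {(1 - p_value) * 100:.1f}% confidence that the AHT difference is real")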
"One of our huge key success factors was running the experiment," says Geiger. "We learned a lot from it. For example, in the third week we weren't seeing the results we thought we should. And [we] realized that we didn't give agents enough time to individually configure their own set of knowledge tools."
Individual customization is an important factor for Center Partners. "People index information differently," says Geiger. "TheBrain lets each individual agent customize his or her knowledge environment. For example, agents can take existing navigation and re-map it so that it's more aligned with their individual cognitive indexing."
Another example of customization is a feature that lets agents flag issues or topics they frequently encounter. These flagged issues appear as icons on their interface for easy access at all times.
Chris Kneeland, Center Partners' chief learning officer, emphasizes the high degree of collaboration between Center Partners' IT, Learning and Development, and Operations departments during implementation.
"You can have the best technology in the world but if you can't get it implemented and have people trained and actually using it, you can't make the benefits real," says Kneeland.
Center Partners developed a team of knowledge managers who are responsible for the software. Transforming the company's existing information into useful knowledge objects required the involvement of all three departments.
The knowledge managers are a cross-functional team of employees from the Operations and the Learning and Development teams.
Since information in the client's system changes constantly, TheBrain has helped Center Partners develop a system for flagging new information. And agents contribute to the knowledge base by clicking a feedback button on each page. This alerts knowledge managers to places where the knowledge is confusing or incomplete.
"Agents realize that they can contribute to improve the way we work," says Kneeland. "They're active participants, rather than just recipients of information."
Coaches use quality monitoring software from Verint to record agents' phone calls and capture on-screen work to ensure that agents use the software properly.
At press time, Center Partners was still rolling out the software to three of its seven call centers, where it plans to have about 250 of its 2,500 agents using the software. The company also hopes to extend the tool for external use.
"As we become better knowledge managers and continue to re-map knowledge, we can export this directly out to the customers," says Geiger.
"TheBrain transformed our knowledge base from 'data' to information that is useable and quickly accessible by every agent, [irrespective] of their skill or their particular familiarity with the system," he says.
We want to hear from you! Please e-mail: joherron@cmp.com
Custom-Made Knowledge
Most people eat during their lunch hour. Many use the daily break to burn off calories at the gym. And a worthy few use the time to build world-class knowledge bases.
Falling into the last category is Mike Knapp, project manager with Crucial Technology's information systems department. He is also, you may recall, one of last month's Customer Care Leadership award-winners.
A chief reason we named Knapp Best IT Analyst was his innovative contributions to Crucial Technology, a division of memory-chip manufacturer Micron. Capping those contributions: the development of Crucial's Memory Selector knowledge base.
Formed in 1996, Crucial is a direct sales channel for products that include memory upgrades, compact flash cards, multimedia cards and video cards. Customers can purchase memory upgrades by speaking with agents in Crucial's call centers in Meridian, ID, and East Kilbride, Scotland. Or they can order products over the Web.
Crucial offers more than 94,000 upgrades for more than 15,000 systems. That adds up to a lot of upgrades for customers to choose from, so orders can get quite complicated. Since customers needed to know their system's technical specifications to find compatible memory products, they were often at risk of selecting and ordering the wrong parts.
In 1997, Crucial's tech support department approached Knapp to devise a way to provide agents with the existing information in Crucial's database. Knapp worked during his lunch hour to create the first prototype of the Crucial Memory Selector.
The Memory Selector enables agents and on-line customers to view information about all of Crucial's PC, laptop, server and printer memory upgrades. By organizing systems based on their compatibility with Crucial's memory upgrades, the software ensures that customers make accurate purchasing decisions.
For example, when you visit Crucial's Web site, you enter your computer's make and model and the Memory Selector returns a list of all the Crucial memory upgrades that are compatible with your system.
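Under the hood, that interaction is essentially a compatibility lookup keyed on make and model. The sketch below, in Python, shows the shape of such a lookup; the system names and part descriptions are invented and are not Crucial's actual catalog.

    # Toy memory-selector lookup: map (make, model) to compatible upgrade parts.
    # Systems and parts are invented for illustration.
    COMPATIBILITY = {
        ("dell", "dimension 4100"): ["128MB SDRAM DIMM", "256MB SDRAM DIMM"],
        ("ibm", "thinkpad t23"): ["256MB SO-DIMM", "512MB SO-DIMM"],
    }

    def find_upgrades(make, model):
        """Return compatible upgrades for a system, or an empty list if unknown."""
        return COMPATIBILITY.get((make.strip().lower(), model.strip().lower()), [])

    if __name__ == "__main__":
        for part in find_upgrades("Dell", "Dimension 4100"):
            print(part)

Because the catalog itself encodes compatibility, a lookup like this can only ever return parts that fit, which is what lets Crucial back Memory Selector purchases with a compatibility guarantee.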
"We take more than 90% of all our orders across our Web site," says Knapp. "Because of the Memory Selector customers are able to select their memory and place their own order without any intervention from agents."
Although customers can also purchase memory products on-line without using the Memory Selector, Crucial encourages customers to use the software by offering a 10% discount on purchases made over the Web, and by guaranteeing that purchases made through the Memory Selector will be 100% compatible with their system.
The Memory Selector has an added bonus. According to Knapp, the purchases with the lowest rate of return are those that are made through the software.
Our Apologies
In our feature on IVR and speech recognition software that appeared in the November 2002 issue of Call Center Magazine, we mistakenly referred to Intervoice's IVR software as Ombia on page 36. The correct name of the product is Omvia.
Title: Improving the Linux vs. Microsoft debate: The impact of TCO
Date: 2002.05.31 4:39
Author: Guest
Topic: Business
http://newsforge.com/article.pl?sid=02/05/30/2119253
- By Jack Bryar -
Is there a chance that Microsoft, proprietary Unix vendors and the Open Source community might stop swearing at each other and actually begin providing
potential customers with usable information? Judging from the news of the last
week or two, it hardly seems possible. However, both Microsoft and the Linux
community are beginning to use cost and performance arguments that can
help their potential customers make intelligent choices. It might even
help repair the tattered reputations of a few vendors.
The scare tactics and disinformation campaigns of Microsoft, Sun,
and even some Open Source advocates have become boring and tiresome as well
as shrill and incredible. Sun is still recovering from its crude disinformation campaign targeting Linux users. John
Stenbit, the bluff, burly chief information officer for the U.S.
Department of Defense, had to
personally swat down a whispering campaign by Microsoft that
claimed Stenbit's flirtation with Open Source posed a threat to
national security. Stenbit, the former chief of TRW's systems integration
business, has been advocating a
highly decentralized network-centric systems architecture [link is PDF]
that doesn't sound much like Microsoft's vision of the future, and he's
considered a Unix guy in defense technology circles. Open Source advocates who
compare Microsoft to drug
pushers and corrupters
of foreign governments don't do the credibility of the Linux community
any favors, either. Is it any wonder that so many large customers are turned off? A senior IT manager I spoke to this week put it best when
he justified the freeze in his systems budget, saying, "We're waiting for
our vendors to grow up."
A couple of years ago, I suggested that the only way to re-energize
the market was to improve the debate about the merits of Linux vs. Windows.
It was past time for some honest comparisons of the type that
matters most to the average company. Vendors needed to explain the difference
between Linux and Microsoft systems in the only manner that means
anything to the average business manager: cost. Evangelists for Linux (or for that
matter, Unix or Windows) should have to compare the cost -- the total cost -- of owning a computing system based on Linux with the total
cost of owning an enterprise architecture based on a proprietary Unix or
Microsoft platform.
These "total cost of ownership" (TCO) calculations are a little more
complicated than comparing the cost of a package of a couple of Red Hat
CDs to the cost of thousands of Microsoft licenses. Even at
Microsoft prices, the cost of software is only a small part of
the total cost of managing a large IT environment. In recent years,
Gartner, KPMG, and Forrester have all issued reports that agree that hardware
and software combined represent less than 20% of the total cost of owning a
system.
Far more important are the costs of running the local
help desk, performing system audits and other administrative functions, and writing and supporting custom code. At most larger companies, these and
similar costs are far more significant than the price of the enterprise
software platform. So are the costs associated with improving or
degrading employee efficiency. A California-based financial services company I
worked with last year calculated it could justify the cost of a wholesale
upgrade of its enterprise architecture based on the fact that the proposed
system could shave five seconds off the time it took most of their 2,000 employees
to log on to their complex of system resources.
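Pulling those pieces together, a back-of-the-envelope model makes the point. The sketch below, in Python, uses invented per-user annual cost figures, not numbers from the Gartner, KPMG or Forrester reports, to show how small the hardware-and-software share of TCO can be once support, administration, custom code, training and lost productivity are counted.

    # Back-of-the-envelope total cost of ownership (TCO) model.
    # All figures are invented for illustration, expressed per user per year (USD).
    costs = {
        "hardware": 600,
        "software_licenses": 250,
        "help_desk_support": 900,
        "administration_and_audits": 1100,
        "custom_code_development": 800,
        "end_user_training": 400,
        "downtime_and_lost_productivity": 700,
    }

    total = sum(costs.values())
    hw_sw = costs["hardware"] + costs["software_licenses"]

    print(f"Total cost of ownership per user/year: ${total}")
    print(f"Hardware + software share: {hw_sw / total:.0%}")  # under 20% with these figures
    for item, value in sorted(costs.items(), key=lambda kv: kv[1], reverse=True):
        print(f"  {item:32s} ${value}")

Swap in real numbers for a given shop and the same arithmetic shows where a platform change actually moves the needle.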
Reliable TCO assessments can be hard to do. Many firms attempting a
TCO analysis quickly give up. They claim there are simply too many
factors to account for. Many are difficult to calculate objectively. Most
evaluators lack the required expertise in accurate cost accounting and performance
benchmarking. Even so, most companies can, if they try, assess
the financial impact of:
Controlling the number of desktop and server images
Reducing logon failures
Reducing support costs
Minimizing end-user training costs
Lowering set-up costs
Enhancing flexibility by allowing users to "work from anywhere"
while accessing the system resources needed to do their jobs
Other factors are harder to quantify. For example, what is the
degree of risk associated with reducing unauthorized access to system
resources by outsiders? And what is the financial impact?
After a slow start, there's a growing trend among vendors and
purchasers to perform TCO analyses when evaluating competing IT platforms. Firms
like Compaq have introduced TCO as
an element of their sales and market education programs. Apple enthusiasts are
promoting TCO analyses to justify retaining Macs in the workplace.
A variety of organizations have tried to use TCO analyses to justify
the deployment of Linux throughout the enterprise.
Many of
these Open Source "analyses" are fairly primitive. Some do little more than
wonder aloud, "Under what possible combination of costs could Windows servers offer
a lower TCO than Linux or OpenBSD servers?" without an honest look
at those costs. Others have done their homework. Last year, Red Hat
sponsored a
devastating IDC study that quantified the TCO benefits of running Linux compared to Unix. The study confirmed the findings of a
1999 Gartner Group study that suggested companies deploying a Linux
platform would enjoy a 20% overall cost advantage compared to firms running
Unix. Proprietary Unix vendors have been on the defensive ever since.
What about Microsoft? A study for the U.S. Department of Defense
conducted by MITRE [PDF] gave the lead to Linux over Microsoft NT in the back office because Linux was easier to manage, had more robust security features,
and supported remote monitoring and management more effectively. MITRE
found that each of these features resulted in measurable cost savings and risk reduction. A
study for LinuxWorld.com was less credible, because the author cooked
the books a little. For example, he assumed that Microsoft-based systems
would require all parties to upgrade their equipment and software
every few years, but that, for some reason, Linux users somehow would
add nothing to their desktops and system managers would add nothing to their back-end systems during the same period.
In any case, Microsoft's product managers claim that much of the
reliability and support costs cited in these studies were specific to the
limitations in NT. Beginning last September, they began to use TCO justifications
to promote Windows 2000 as "An Operating System Even a CFO Would Love." Microsoft has put together a "Rapid Economic Justification" SWAT team to promote sales based on quick returns on company investments in IT hardware and software. In
addition, the company has begun to publish a series of white papers and
customer profiles showing, the company claims, that Windows 2000 and its successors are generating dramatic improvements in reliability and significantly lowered
administrative and support costs.
Are these claims for real?
Compared to NT, they almost certainly are. Even in mixed NT/Unix
environments, it is not hard to believe the numbers in an upcoming customer study
that claims a company could reduce its internal domains, cut the number of
servers and reduce IT support staffing by 20% by moving off NT to a
different operating platform. Other Microsoft-sponsored TCO studies, such
as one focused on Allegis, merit a slightly more skeptical look.
According to that study, Allegis claims it could generate new products faster
running on Windows 2000 than on Unix. Perhaps.
In any case, such papers, and similar efforts by Open Source and
proprietary vendors, represent a marked improvement over earlier, cruder efforts to draw cost comparisons between Microsoft and its
competition. Moreover, they advance the competitive discussion. Competing systems
vendors from the Microsoft, Linux and Unix communities need to move away from rancorous exchanges of disinformation and begin to focus on issues that
will advance the interests of their customers in a safe, secure,
inexpensive operating environment. If they compete hard enough, their products
might improve. And that wouldn't be a bad thing, either.
Links
"recovering from its crude disinformation campaign" - http://newsforge.com/article.pl?sid=02/05/23/1634224&mode=thread&tid=23
"to personally swat down " - http://www.eetimes.com/sys/news/OEG20020523S0065
"a highly decentralized network-centric systems architecture" - http://www.c3i.osd.mil/org/cio/doc/testeval.pdf
"drug pushers" - http://newsforge.com/comments.pl?sid=23694&cid=13748
"corrupters of foreign governments" - http://newsforge.com/comments.pl?sid=23694&cid=13809
"have introduced TCO" - http://www.compaq.com/tco/
"are promoting TCO analyses " - http://www.applelinks.com/articles/2002/05/20020528133533.shtml
"Many of these Open Source "analyses"" - http://geodsoft.com/opinion/server_comp/tco.htm
"a devastating IDC study" - https://redhat1.rgc2.net/servlet/website/ResponseForm?koE.2eLss09v
"MITRE " - http://www.mitre.org/support/papers/tech_papers_01/kenwood_software/kenwood_software.pdf
"A study for LinuxWorld.com" - http://www.linuxworld.com/site-stories/2001/1018.tco.html
"An Operating System Even a CFO Would Love" - http://einsite.bitpipe.com/data/detail?id=1017778317_238&type=RES&x=1322322521
"Rapid Economic Justification" - http://www.microsoft.com/presspass/features/2002/apr02/04-10businessvalue.asp
"customer profiles" - http://einsite.bitpipe.com/data/detail?id=1018967338_45&type=RES&x=1951772936
"such as one focused on Allegis" - http://www.microsoft.com/business/casestudies/microsoft_allegis.asp
"earlier, cruder efforts" - http://einsite.bitpipe.com/data/detail?id=1005257449_624&type=RES&x=1751407872
�
Improving the Linux vs. Microsoft debate: The impact of TCO
Date
�
2002.05.31 4:39
Author
�
Guest
Topic
�
Business
http://newsforge.com/article.pl?sid=02/05/30/2119253
- By Jack Bryar -
Is there a chance that Microsoft, proprietary Unix vendors and the Open Source community might stop swearing at each other and actually begin providing
potential customers with usable information? Judging from the news of the last
week or two, it hardly seems possible. However, both Microsoft and the Linux
community are beginning to use cost and performance arguments that can
help their potential customers make intelligent choices. It might even
help repair the tattered reputations of a few vendors.
The scare tactics and disinformation campaigns of Microsoft, Sun,
and even some Open Source advocates have become boring and tiresome as well
as shrill and incredible. Sun is still recovering from its crude disinformation campaign targeting Linux users. John
Stenbit, the bluff, burly chief information officer for the U.S.
Department of Defense had to
personally swat down a whispering campaign by Microsoft that
claimed Stenbit's flirtation with Open Source posed a threat to
national security. Stenbit, the former chief of TRW's systems integration
business, has been advocating a
highly decentralized network-centric systems architecture [link is PDF]
that doesn't sound much like Microsoft's vision of the future, and he's
considered a Unix guy in defense technology circles. Open Source advocates who
compare Microsoft to drug
pushers and corrupters
of foreign governments don't do the credibility of Linux community
any favors, either. Is it no wonder that the many large customers are so turned off? A senior IT manager I spoke to this week put it best when
he justified the freeze in his systems budget, saying, "We're waiting for
our vendors to grow up."
A couple of years ago, I suggested that the only way to re-energize
the market was to improve the debate about the merits of Linux vs. Windows.
It was past time for some honest comparisons of the type that
matters most to the average company. Vendors needed to explain the difference
between Linux and Microsoft systems in the only manner that means
anything to the average business manager: cost. Evangelists for Linux (or that
matter, Unix or Windows) should have to compare the cost -- the totalcost -- of owning a computing system based on Linux compared to the total
cost of owning an enterprise architecture based on a proprietary Unix or
Microsoft platform.
These "total cost of ownership" (TCO) calculations are a little more
complicated than comparing the cost of a package of a couple of Red Hat
CDs to the cost of thousands of Microsoft licenses. Even at
Microsoft prices, the cost of software is only a small part of
the total cost of managing a large IT environment. In recent years,
Gartner, KPMG, and Forrester have all issued reports that agree that hardware
and software combined represent less than 20% of the total cost of owning a
system.
Far more important are the costs of running the local
help desk, performing system audits and other administrative functions, and writing and supporting custom code. At most larger companies, these and
similar costs are far more significant than the price of the enterprise
software platform. So are the costs associated with improving or
degrading employee efficiency. A California-based financial services company I
worked with last year calculated it could justify the cost of a wholesale
upgrade of its enterprise architecture based on the fact the the proposed
system could shave five seconds off the time it took most of their 2,000 employees
to log on to their complex of system resources.
Reliable TCO assessments can be hard to do. Many firms attempting a
TCO analysis quickly give up. There claim there are simply too many
factors to account for. Many are difficult to calculate objectively. Most
evaluators lack the required expertise in accurate cost accounting and performance
benchmarking Even so, most companies can, if they try, assess
the financial impact of :
Controlling the number of desktop and server images
Reducing logon failures
Reducing support costs
Minimizing end-user training costs
Lowering set-up costs
Enhancing flexibility by allowing users to "work from anywhere"
while accessing the system resources needed to do their jobs
Other factors are harder to quantify. For example, what is the
degree of risk associated with reducing unauthorized access to system
resources by outsiders? And what is the financial impact?
After a slow start, there's a growing trend by vendors and
purchasers to perform TCO analyses when justifying competing IT platforms. Firms
like Compaq have introduced TCO as
an element of their sales and market education programs. Apple enthusiasts are
promoting TCO analyses to justify retaining Macs in the workplace.
A variety of organizations have tried to use TCO analyses justify
the deployment of Linux throughout the enterprise.
Many of
these Open Source "analyses" are fairly primitive. Some do little more than
wonder aloud, "what possible combination of costs could Windows servers offer
a lower TCO than Linux or OpenBSD servers?" without an honest look
at those costs. Others have done their homework. Last year, Red Hat
sponsored a
devastating IDC study that quantified the TCO benefits of running Linux compared to Unix. The study confirmed the findings of a
1999 Gartner Group study that suggested companies deploying a Linux
platform would enjoy a 20% overall cost advantage compared to firms running
Unix. Proprietary Unix vendors have been on the defensive ever since.
What about Microsoft? A study for the U.S. Department of Defense
conducted by MITRE [PDF] gave the lead to Linux over Microsoft NT in the back office because Linux was easier to manage, had more robust security features,
and supported remote monitoring and management more effectively. MITRE
found that each of these features resulted in measurable cost savings and risk reduction. A
study for LinuxWorld.com was less credible, because the author cooked
the books a little. For example, he assumed that Microsoft-based systems
would require all parties to upgrade their equipment and software
every few years, but that Linux users would somehow
add nothing to their desktops and system managers would add nothing to their back-end systems during the same period.
In any case, Microsoft's product managers claim that many of the
reliability and support costs cited in these studies were specific to
limitations in NT. Beginning last September, they began to use TCO justifications
to promote Windows 2000 as "An Operating System Even a CFO Would Love." Microsoft has put together a "Rapid Economic Justification" SWAT team to promote sales based on quick returns on company investments in IT hardware and software. In
addition, the company has begun to publish a series of white papers and
customer profiles showing, the company claims, that Windows 2000 and its successors are generating dramatic improvements in reliability and significantly lower
administrative and support costs.
Are these claims for real?
Compared to NT, they almost certainly are. Even in mixed NT/Unix
environments, it is not hard to believe the numbers in an upcoming customer study
that claims a company could reduce its internal domains, cut the number of
servers and reduce IT support staffing by 20% by moving off NT to a
different operating platform. Other Microsoft-sponsored TCO studies, such
as one focused on Allegis, merit a slightly more skeptical look.
According to that study, Allegis claims it could generate new product faster
running on Windows 2000 than on Unix. Perhaps.
In any case, such papers, and similar efforts by Open Source and
proprietary vendors, represent a marked improvement over earlier, cruder efforts to draw cost comparisons between Microsoft and its
competition. Moreover, they advance the competitive discussion. Competing systems
vendors from the Microsoft, Linux and Unix communities need to move away from rancorous exchanges of disinformation and begin to focus on issues that
will advance the interests of their customers in a safe, secure,
inexpensive operating environment. If they compete hard enough, their products
might improve. And that wouldn't be a bad thing, either.
Links
"recovering from its crude disinformation campaign" - http://newsforge.com/article.pl?sid=02/05/23/1634224&mode=thread&tid=23
"to personally swat down " - http://www.eetimes.com/sys/news/OEG20020523S0065
"a highly decentralized network-centric systems architecture" - http://www.c3i.osd.mil/org/cio/doc/testeval.pdf
"drug pushers" - http://newsforge.com/comments.pl?sid=23694&cid=13748
"corrupters of foreign governments" - http://newsforge.com/comments.pl?sid=23694&cid=13809
"have introduced TCO" - http://www.compaq.com/tco/
"are promoting TCO analyses " - http://www.applelinks.com/articles/2002/05/20020528133533.shtml
"Many of these Open Source "analyses"" - http://geodsoft.com/opinion/server_comp/tco.htm
"a devastating IDC study" - https://redhat1.rgc2.net/servlet/website/ResponseForm?koE.2eLss09v
"MITRE " - http://www.mitre.org/support/papers/tech_papers_01/kenwood_software/kenwood_software.pdf
"A study for LinuxWorld.com" - http://www.linuxworld.com/site-stories/2001/1018.tco.html
"An Operating System Even a CFO Would Love" - http://einsite.bitpipe.com/data/detail?id=1017778317_238&type=RES&x=1322322521
"Rapid Economic Justification" - http://www.microsoft.com/presspass/features/2002/apr02/04-10businessvalue.asp
"customer profiles" - http://einsite.bitpipe.com/data/detail?id=1018967338_45&type=RES&x=1951772936
"such as one focused on Allegis" - http://www.microsoft.com/business/casestudies/microsoft_allegis.asp
"earlier, cruder efforts" - http://einsite.bitpipe.com/data/detail?id=1005257449_624&type=RES&x=1751407872
Title: Commentary: Linux myths that never die
Date: 2003.01.04 7:18
Author: tina
Topic: Advocacy
http://newsforge.com/article.pl?sid=03/01/04/1221251
- by John Fitzgibbon -
As a long-time Microsoft Windows user who recently switched almost exclusively
to Linux, I'd like to share a few observations on the transition.
Rather than writing an exhaustive feature comparison, I'm going to look at a
few common (and incredibly persistent) myths about Linux, comparing the myth
with my own experience. I emphasize that this is not a technical analysis of
Windows/Linux pros and cons - it's a purely subjective study based on my
personal experiences with hardware and software I use every day.
For the Linux faithful my observations will probably read like old news, but
these myths are so ingrained in the Windows culture that I think this news
bears repeating.
So, in no particular order...
Myth: Linux support for power management is second-rate.
Fact: For me, the most important aspect of power management is the
"suspend" function on my laptop. I've found the Linux suspend function works
flawlessly, and suspend/resume operations are much faster than under Windows
2K. For easy access, I added a "suspend" button to the taskbar beside the
"lock"/"log-out" buttons.
Myth: Only techno-geeks can keep Linux software up to date.
Fact: Red Hat's Update Agent updates all my Red Hat software with the
click of a few buttons. I get emailed notifications when updates are
available. I decide when and what I want to upgrade, which suits me just
fine.
Myth: A switch to Linux means all my Windows "stuff" will be lost.
Fact: I installed Linux on a separate hard disk partition. The Linux
boot manager (GRUB) allows me to boot either Windows or Linux. When I boot
Linux, all my old Windows drives are mounted and fully accessible (I mount
them as /win/C, /win/D, etc., so things are easy to find). I installed my
Windows fonts on Linux using the graphical font manager, so documents look
pretty much as they did under Windows. I haven't had any problems opening my
Microsoft Office documents using OpenOffice (though I confess that I don't use many advanced MS Office features - your mileage may vary if you're a true
Microsoft-techy). I use Samba to mount remote Windows drives, so I haven't
needed to switch O/S on my file servers.
Myth: Linux does not support a wide range of devices.
Fact: I use DVD, CD, wireless networking, wireless keyboard and mouse,
Rio MP3 player and various other USB devices on my laptop. On my desktop I
use scanners, printers, cameras and a TV card. I've had no problems getting
any device to work. In some cases the drivers are not available on the
installation CDs, so a little "googling" has been required to find what I
need. I suspect this has more to do with Microsoft's monopoly and dubious
licensing practices than any failure on the part of the Linux community. I
have occasionally had to do "make; make install" operations from the command
line, but, frankly, this is not as scary or technically demanding as certain
people might have you believe.
The bottom line is that most things I need to do on a day-to-day basis I can
do as well, or better, with Linux. And, needless to say, the TCO myth isn't
even worth talking about.
Got something to say about Open Source business, Linux advocacy, or anything else of interest to our audience? Submit an HTML-formatted commentary to editors@newsforge.com and we might publish it.
Links
"John Fitzgibbon" - http://www.jfitz.com/resume
What Does Free Mean? or What do you mean by Free Software?
Note: In February 1998 a group moved to replace the term
"Free Software"
with "Open Source
Software". As will become clear in the
discussion below, they both refer to essentially the same thing.
Many people new to free software find themselves confused because
the word "free" in the term "free software" is not used the way they expect.
To them free means "at no cost".
An English dictionary lists almost twenty different meanings for "free".
Only one of them is "at no cost". The rest refer to liberty
and lack of constraint. When we speak of Free Software,
we mean freedom, not price.
Software that is free only in the sense that you don't need to pay
to use it is hardly free at all. You may be forbidden to pass it on,
and you are almost certainly prevented from improving it. Software
licensed at no cost is usually a weapon in a marketing campaign to
promote a related product or to drive a smaller competitor out of
business. There is no guarantee that it will stay free.
Truly free software is always free. Software that is placed in the
public domain can be snapped up and put into non-free programs. Any
improvements then made are lost to society.
To stay free, software must be copyrighted and licensed.
To the uninitiated, either a piece of software is free or it isn't. Real life
is much more complicated than that. To understand what kinds of things people
are implying when they call software free we must take a little detour into
the world of software licenses.
Copyrights are a method of protecting the rights of the creator of
certain types of works. In most countries, software you write is automatically copyrighted.
A license is the author's way of allowing use of his creation (software in this case)
by others, in ways that are acceptable to him.
It is up to the author to include a license which declares in what ways the software may be used.
For a proper discussion of copyright see http://lcweb.loc.gov/copyright/.
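As a concrete illustration of what "including a license" looks like in practice, here is the kind of notice an author typically places at the top of each source file. The file name and author below are made up; the notice text is the standard wording recommended for GPL-licensed programs, shown in a Python file only to keep this document's examples in one language.

    # hello.py -- a hypothetical example module
    # Copyright (C) 2003  Jane Hacker <jane@example.org>
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.

    def greet(name):
        """The code itself is trivial; the header above is what grants
        (and conditions) the freedoms discussed in this section."""
        return "Hello, " + name

Without such a notice, would-be users and distributors have no reliable way of knowing which of the freedoms discussed below they actually have.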
Of course, different circumstances call for different licenses.
Software companies are looking to protect their assets, so they release only compiled code
(which isn't human readable) and put many restrictions on the use of the software.
Authors of free software on the other hand are generally looking for some combination of the following:
Not allowing use of their code in proprietary software. Since they are releasing their code for all to use, they don't want to see others steal it.
In this case, use of the code is seen as a trust: you may use it, as
long as you play by the same rules.
Protecting identity of authorship of the code. People take great pride in their work and do not want someone else to come along and remove their name from it or claim that
they wrote it.
Distribution of source code. One of the problems with most commercial code is that you
can't fix bugs or customize it since the source code is not available. Also, the company
may decide to stop supporting the hardware you use. Many free licenses force the distribution of the source code. This protects the user by allowing them to
customize the software for their needs. This also has other
ramifications which will be discussed later.
Forcing any work that includes part of their work (such works are called derived works in copyright discussions) to use the same license.
Many people write their own license. This is frowned upon, as writing a
license that does what you want involves subtle issues. Too often the wording used is
ambiguous, or the conditions created conflict with one another.
Writing a license that would hold up in court is even harder.
Luckily, there are a number of licenses already written that probably
do what you want.
Three of the most widely found licenses are:
The GNU General Public
License (GPL). Some good background information on software licenses
and a copy of the license can be found at
the GNU web site.
This is the most common free license in use in the world.
Artistic License.
BSD style license.
Some of the features these licenses have in common:
You can install the software on as many machines as you want.
Any number of people may use the software at one time.
You can make as many copies of the software as you want and give them
to whomever you want (free or open redistribution).
There are no restrictions on modifying the software (except for keeping certain notices intact).
There is no restriction on distributing, or even selling, the software.
This last point, which allows the software to be sold for money, seems to go
against the whole idea of free software. It is actually one of its strengths.
Since the license allows free redistribution, once one person gets a copy
they can distribute it themselves. They can even try to sell it.
In practice, it costs essentially no money to make electronic
copies of software. Supply and demand will keep the cost down. If it
is convenient for a large piece of software or an aggregate of software
to be distributed by some media, such as CD, the vendor is free to charge
what they like. If the profit margin is too high, however, new vendors will
enter the market and competition will drive the price down.
While free software is not totally free of constraints (only putting something
in the public domain does that) it gives the user the flexibility
to do what they need in order to get work done. At the same time, it protects
the rights of the author. Now that's freedom.
The Problem With Linux & OSS in general
The problem with Linux and other FLOSS spin-offs can be neatly encapsulated in a few words: a whole generation of users, trainers, and strategic and policy makers (in academia) have matured seeing the Microsoft logo on the splash screen. The author would like to mention at the outset that he has nothing per se against the corporation; however, it says a lot about human nature that spoon-feeding has made users lazy and unwilling to unlearn.
The major market for, and purchaser of, software is the government. Given the huge corpus of funds it can bring to the table, it is no surprise that vigorous discounting is also at work. Given the size of the market inroads that can be achieved, dominant players usually offer lenient terms when dealing with the government, both at state and national levels, or its various organs. From a business point of view, the next major market, albeit a niche one, should be academia: the schools, colleges and universities with computer science in the course curricula and an active student roster. Unfortunately, here is where the game gets a bit murky. The licensing paradigm represented by Microsoft and its like belongs to the "proprietary" school of thought: ownership of the software remains with the vendor, the user-purchaser is vested with limited usability rights, and the licensing agreement is tenuous at best. Moreover, per-seat licensing requires huge investments in infrastructure to set up a well-equipped and functionally viable lab. Given the amount of subsidies and cross-subsidies in government-funded education, such outflow is not permitted, and in the growing number of privately funded educational institutions such investment takes second place. Moreover, given the absurd pricing of the software, in many cases a straight conversion of offshore rates into local currency, software piracy is implicitly and tacitly encouraged. This is not to condone the practice, yet there are important reasons for its rampant presence.
Of late, the BSA-NASSCOM alliance has managed to send out flyers and mailers to individuals culled from a particular database, announcing rewards and encouraging whistle-blowers. With its in-place rewards policy and assurance of anonymity for the informer, this forms a very attractive package. While the results of this campaign are yet to roll in, it is obvious that some institutions are already feeling the heat. Encouraged by the success(es) in the case of MIT, a few exemplary and punitive actions have managed to turn the focus on the FLOSS movement. Cases in point are Kalyani Govt Engg College, etc. (iLUG-Kolkata members are aware of such incidents). Yet such events have not managed to create a mass migration to FLOSS or GNU/GPL software platforms. The reasons for this are manifold, and this article will try to highlight some of them.
1. The diversity of Linux (or FLOSS) distributions leads to a richness of choice, which can be bewildering for a first-time migrant. Given that each platform/distribution has its own coterie of die-hard fans, it becomes difficult to take an unbiased, considered approach to the migration.
2. Easing the pain of transition: migrating from any environment to a new one requires some slack time for adjustment. In the case of migrating to a Linux system, a bit of ingenuity as well as intuition is required, together with guidance and training.
The author is of the opinion that the first problem can be solved if a customized distribution based on a target audience profile is marketed. Schools and colleges with computer science courses have a very limited application requirement profile (a separate article will discuss satisfying the syllabi). A distribution that addresses these demands should find a ready market. The distribution should be stable, secure, scalable and suitably priced, so as to fill the gap for a de facto standard and uniform desktop distribution, and at the same time be able to serve in a client-server environment.
The second problem is more difficult to address. In the first place, institutions willing to give FLOSS software a go should be made aware of the licensing regime; the most important factor, that licensing costs are greatly reduced, should address the issue of financial control. Local Linux User Groups should actively participate in outreach programs in tandem with educational institutions, holding demonstrations and technical presentations alongside corporate presentations in order to remove the myths and misconceptions about the system. Taking the latest releases of the various distributions as a baseline, only a very slight difference at the GUI level can be detected compared with the prevailing Microsoft and proprietary lines. This means the pitch should focus on the bang for the buck rather than on finer technical details. Installation fests could be organized on school and college premises, so as to introduce and welcome more people into the fold.
More problems remain, including training for teachers and hardware issues that need to be addressed, among others. A separate article is planned for these.
Friday, January 17, 2003
The platform choice
MSI
Monday, January 13, 2003
As manufacturers look for the best way of using information technology (IT) to support their future business strategies, they should keep two facts in mind.
First, the future of business computing will center on Internet-based applications. Second, almost all of those applications will be built on one of two software development platforms.
These two platforms are Java 2 Enterprise Edition--commonly referred to as J2EE--which was introduced roughly three years ago by Sun Microsystems, Santa Clara, Calif.; and Microsoft .NET--pronounced dot Net--which was unveiled by Microsoft Corp., Redmond, Wash., in 2002.
Proponents of both platforms contend that they offer everything a company needs to build an IT infrastructure for doing business in the 21st century. Surprisingly, they're both right, although each platform has characteristics that make it more suitable for particular types of businesses.
"We don't see one platform winning over the other; there will continue to be two camps," says Colleen Niven, vice president of technology research for the Boston-based consulting firm AMR Research. In general, Niven says, large companies are gravitating toward J2EE while small and medium-size enterprises tend to favor .NET.
From a functional standpoint, the two platforms are comparable, which is not surprising since they were developed for similar reasons. While these platforms can be used to build an IT infrastructure for any type of business, both Sun and Microsoft have organizations dedicated to helping manufacturers understand how these platforms can meet their particular needs.
The future of manufacturing
"If you look at our vision of the future of manufacturing, it includes an IT environment that integrates all aspects of the business," says Bill Gerould, Sun's director of manufacturing. "This environment links the customer side with the supplier side and the employee side, as well as with product development, the factory floor, and all of the enterprise business applications.
"Today, most of those departments operate in silos," Gerould continues. "So if I'm in product development, I buy whatever applications I need to do my individual job, and I don't worry much about what goes on outside of my silo."
Gerould says Sun's attempt to help manufacturers change this dynamic revolves around an IT architecture called Sun Open Net Environment, or Sun ONE, of which J2EE is the major building block. "The architecture that we see with Sun ONE, as well as with .NET, will move all of an organization's applications to a Web-based infrastructure," Gerould says. "That means even if applications are built to address the needs of specific departments, they can be linked to this Internet backbone, which should make it easier for a company to integrate those applications as they see fit."
Don Richardson, Microsoft's director of manufacturing industry solutions, says .NET was devised in response to "the struggles we were having with integrating disparate applications. At the time, integration only truly worked when the applications you wanted to integrate ran on the same [operating system], and they almost had to be on the same type of hardware."
The underlying programming languages are the pieces of both the .NET and J2EE environments that do the most to facilitate smooth system integration. J2EE employs the Java programming language, which Sun introduced several years before it developed the other components of the J2EE framework. When Microsoft developed the .NET framework, it also created a new language called C# (pronounced C sharp), which some programmers contend is a modified version of Java.
Boosts productivity
Both of these are object-oriented languages, which means that the logic inside of an application is bundled into small packets, called objects, that can be easily reused to create new applications, or to add functionality and features to existing applications. Programmers say this drastically reduces the time it takes to develop applications, and that is a major reason why the majority of packaged application vendors are abandoning previous generations of software development tools in favor of J2EE or .NET.
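A minimal sketch of that idea follows. The article is about Java and C#, but the notion of bundling logic into a reusable object reads the same in any object-oriented language; Python is used here only to keep this document's examples in one language, and the class names are invented.

    # Business logic bundled into one reusable "packet" (an object) ...
    class Invoice:
        def __init__(self, line_items):
            self.line_items = line_items        # list of (description, amount)

        def total(self):
            return sum(amount for _, amount in self.line_items)

    # ... and new functionality added later by reusing it, not rewriting it.
    class DiscountedInvoice(Invoice):
        def __init__(self, line_items, discount):
            super().__init__(line_items)
            self.discount = discount

        def total(self):
            return super().total() * (1 - self.discount)

    print(DiscountedInvoice([("widgets", 100.0)], discount=0.10).total())  # 90.0

The time savings vendors describe come from the second step: existing objects are extended rather than rewritten for each new application.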
"Because Java is such a productive environment for the programmer, we can develop and deliver our applications much more quickly," says Debbie Schneider, senior product manager with PTC , Needham, Mass., a supplier of CAD and product life-cycle management (PLM) software. "Our customers benefit from our ability to add so many features and functionality to our products."��
In addition to these object-oriented programming languages, the J2EE and .NET frameworks contain a number of features that make it easier for organizations to conduct e-Business. These features, which often are referred to as services, include such things as Web commerce engines and programs that handle security functions such as verifying the identity of users on Internet-based networks.��
Most of the advertising for both J2EE and .NET refers to them as platforms for creating Web services, which have been hailed in many quarters as the next great advance in enterprise computing. Web services essentially are software components that have been outfitted with specific communications protocols that allow them to pass information from one application to another over the Internet, without the need for a direct connection between those two applications, and without regard for which operating systems the applications run on.��
It's unlikely that users of packaged applications will ever need to know anything about Web services protocols. That's because these protocols are embedded in both the J2EE and .NET environments, which makes it easier for application developers to convert pieces of their programs to Web services.��
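For readers curious about what "specific communications protocols" means in practice, the sketch below shows the shape of a typical Web-service call: an XML (SOAP 1.1) envelope POSTed over ordinary HTTP. The endpoint URL, namespace, and operation names are invented placeholders, and the snippet is written in Python rather than Java or C# purely to keep this document's examples in one language; the point is that the caller neither knows nor cares whether a J2EE or a .NET server answers.

    import urllib.request

    ENDPOINT = "http://example.com/services/OrderStatus"   # placeholder URL

    soap_envelope = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetOrderStatus xmlns="http://example.com/schemas/orders">
          <OrderId>12345</OrderId>
        </GetOrderStatus>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        ENDPOINT,
        data=soap_envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/schemas/orders/GetOrderStatus"},
    )
    # Whatever platform sits behind the URL, the request and response are
    # just XML over HTTP, which is what makes the two worlds interoperable.
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))

In practice a developer of that era would more likely let a generated client stub (JAX-RPC on the Java side, a Visual Studio Web reference on the .NET side) produce this call, but the bytes on the wire look essentially like the envelope above.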
No more plumbing
David Willet, chief technologist for Frontstep, an enterprise resources planning (ERP) software supplier based in Columbus, Ohio, says having the .NET framework handle all of this "internal plumbing" leaves application developers free to add more useful features to a software package. Frontstep, which is set to be acquired by Atlanta-based ERP vendor MAPICS early in 2003, built SyteLine 7, the most recent version of its ERP package, on the .NET framework.
"Before .NET, we spent a lot of time building documents that could travel through firewalls, which required the use of several [Web services protocols]," Willet says. "With .NET, we simply create the documents that we want to pass from one system to the next, and they are automatically delivered in the appropriate manner. This frees us up to be a company that creates business processes that our customers can use, rather than having to constantly master and manage new technology."
Perhaps the most obvious sign that J2EE and .NET will be the development platforms of the future came roughly 18 months ago, when Walldorf, Germany-based SAP, the world's leading ERP software supplier, announced that it would begin building its applications on the J2EE framework. "The emergence of e-Business, which created the need for an open infrastructure, led to our adoption of J2EE," says Peter Kuerpick, SAP's senior vice president of server technology development. "That puts our applications on a platform in which there is common knowledge in the marketplace. That was appealing to our customers. It means they can work in a more familiar environment if they need to modify an application."
Learning from history
An application developer's choice of J2EE or .NET typically has more to do with the developer's history than with its future vision. "J2EE tends to be used more by companies that are moving from client/server applications that ran on the UNIX operating system," AMR's Niven says. "That's why you see the larger enterprise system vendors like SAP, Oracle, [Redwood Shores, Calif.,] and PeopleSoft [Pleasanton, Calif.] adopting J2EE."
On the other hand, companies whose previous applications ran on some version of Microsoft's Windows operating system, such as Frontstep and SYSPRO, Costa Mesa, Calif., are more likely to use .NET. The vendors say there are practical reasons for these choices.
"We considered both options before deciding on J2EE, because it gives our customers more choices," says Jack Young, an executive vice president with MRO Software, an enterprise asset management software supplier based in Bedford, Mass. "It allows our applications to run on the various UNIX platforms, as well as on Linux and Windows."
The Java programming language allows J2EE-based applications to run on multiple platforms, but not everyone considers that a virtue. "J2EE has the same problems as the classic UNIX market," says Mike Carnahan, an executive vice president with ROI Systems, a mid-market-focused ERP supplier based in Minneapolis. "There are 16 varieties of it. For years we watched the UNIX wars that allowed Windows to come from nowhere to become the fastest-growing operating system. I think the same thing will happen with J2EE and .NET."
The J2EE landscape does mirror the UNIX space in many ways. After Sun created J2EE, it released the specifications to the general public, leaving anyone free to develop applications and tools that comply with those specifications. The problems with this approach become most apparent when a company wants to use the J2EE platform as an e-Business backbone. That requires the use of a Web application server, which is a piece of middleware that stores and executes the business logic for Web-based applications. A number of vendors have developed J2EE Web application servers, and each one has slightly different characteristics.
Competing Web servers
The two most popular J2EE Web application servers are WebSphere from IBM, Armonk, N.Y.; and WebLogic from BEA Systems, San Jose, Calif. Some application developers--including SAP, Oracle, and PTC--also have their own Web application servers.
Brad Brown is chief architect with The Ultimate Software Consultants, a Lombard, Ill.-based IT services firm that specializes in Oracle consulting. He says Oracle's application server is nearly identical to WebSphere and WebLogic, but he also points out that there are other Web servers on the market, including some open source products, that don't adhere strictly to J2EE standards. "There are differences in the way you write applications for these various Web application servers," he says.
Brown also concedes that with .NET, "Microsoft has done a nice job of bundling a lot of products together and making it easier to develop applications." The .NET bundle includes the C# programming language, an application development environment called Visual Studio.NET, and a Web application server. With J2EE, users have to assemble all of these pieces on their own.
The experts advise users to consider their own history when deciding whether to purchase applications based on J2EE or .NET. Many companies are likely to end up with both platforms, even if they choose one as a corporate standard. "The good news is the two environments can co-exist," Niven says. "There are ways of connecting them, including through Web services."
While .NET applications tend to have a lower cost of ownership because they come in a fully integrated package from a single vendor, the real issue is not J2EE versus .NET. "It's really a question of what your current IT infrastructure looks like, and what skills you have in your organization," says Niven.
"The Microsoft shops--those that have Visual Basic developers and experience with NT and SQL Server--will fare best with .NET-based applications," Niven concludes. "If you're a UNIX house with a bunch of C programmers, you should probably look to the J2EE environment."
For those reasons alone, these two platforms are bound to co-exist well into the future.
A multiplatform universe
By Sidney Hill, Jr.
Some application vendors embrace J2EE, .NET, and other platforms
Industry analysts predict that many companies will deploy both J2EE and .NET in the same enterprise. But it also seems clear that these two platforms of the future will have to accommodate the numerous legacy systems that companies want to keep as part of their infrastructures.
The realities of business make this type of co-existence necessary, and the concepts on which J2EE and .NET are based--specifically the idea of code reuse--make it possible. Both platforms allow for converting nearly any software code into a program object that can be linked with other objects to support a particular business process.
This capability is critical to the recent announcement by Epicor Software, Irvine, Calif., that it will create a .NET version of its manufacturing software package that uses business logic created on the OpenEdge platform from Progress Software, Bedford, Mass.
"This is a time-to-market issue," says Tony Wilby, development director in Epicor's manufacturing solutions group. "We want to get a new manufacturing product on the market by the end of 2003, and to do that we need to leverage the business logic from the successful product that we have written on the Progress platform."
Progress, recognizing that users want the freedom to choose the platform that suits their particular needs, has been working for some time to make its OpenEdge platform compatible with both .NET and J2EE, according to David Olson, Progress' director of enterprise solutions.
"The idea is that our partners can encapsulate their business logic into objects that are appropriate for things such as purchase orders," Olson explains. "They can then create interfaces to extend those objects to the .NET framework."
The rise of .NET and J2EE has prompted Siebel Systems, San Mateo, Calif., to begin development of two new versions of its customer relationship management (CRM) software suite. The current version of the Siebel product is written in C++ and it uses Siebel's own application server. But Doug Smith, Siebel's vice president of architecture, says the company expects to release both J2EE and .NET versions of its suite by the end of 2004.
"Analysts predict that there will be an equal mix of companies using J2EE and .NET," Smith says. "But the largest number of companies will use both. We will have J2EE and .NET versions of our product with the same functionality, and they can co-exist within the same customer's organization."
http://linux.ittoolbox.com/common/print.asp?i=86638