Saturday, July 30, 2011

New book: MCTS Self-Paced Training Kit (Exam 70-643): Configuring Windows Server 2008 Applications Infrastructure (2nd Edition)

We’re pleased to announce that MCTS Self-Paced Training Kit (Exam 70-643): Configuring Windows Server 2008 Applications Infrastructure (2nd Edition) (ISBN 9780735648784; 640 pages) is now available for purchase. This Training Kit is designed for information technology (IT) professionals who support or plan to support Windows Server 2008 R2 networks and who also plan to take the Microsoft Certified Technology Specialist (MCTS) 70-643 exam.




Best Microsoft MCTS Training, Microsoft MCITP Training at certkingdom.com


This 2-in-1 kit includes the official Microsoft study guide, plus practice tests on CD to help you assess your skills. It comes packed with the tools and features exam candidates want most—including in-depth, self-paced training based on final exam content; rigorous, objective-by-objective review; exam tips from expert, exam-certified authors; and customizable testing options. It also provides real-world scenarios, case study examples, and troubleshooting labs for the skills and expertise you can use on the job.

You can find the book’s Table of Contents in this previous post.

Here is an excerpt from this Training Kit:
Chapter 2: Configuring Server Storage and Clusters

Storage area networks (SANs), host bus adapters (HBAs), and logical unit numbers (LUNs) were once the sole domain of storage specialists, far removed from the expertise of your average Windows administrator. However, the arrival of new technologies, such as the Windows Virtual Disk service and Internet SCSI (iSCSI), along with the increasingly complex realities of enterprise storage, has brought these once-specialized topics into the realm of Windows Server 2008 administration. To be an effective Windows server administrator today, you still need to know the difference between the various RAID levels, but you also need to know quite a bit more about advanced server storage technologies.

This chapter introduces you to the basics of disk management in Windows Server 2008 R2, along with more advanced storage technologies such as SANs. The chapter then builds upon this storage information to introduce the various clustering technologies available in Windows Server 2008 R2.
Exam objectives in this chapter:

* Configure storage.
* Configure high availability.

Lessons in this chapter:

* Lesson 1: Configuring Server Storage
* Lesson 2: Configuring Server Clusters

Before You Begin

To complete the lessons in this chapter, you must have:

* A computer named Server2 that is running Windows Server 2008 R2. Beyond the disk on which the operating system is installed, Server2 must be equipped with two additional hard disks of equal size.
* A basic understanding of Windows administration.

Lesson 1: Configuring Server Storage

A variety of server storage solutions is available for corporate networks, and Windows Server 2008 R2 connects to these technologies in new ways. This lesson introduces you to the major server storage types and the tools built into Windows Server 2008 R2 you can use to manage them.

Understanding Server Storage Technologies

As the demand for server storage has grown, so too has the number of new storage technologies. Over the years, the range of server storage options has broadened from simple direct-attached storage (DAS) to network-attached storage (NAS) and, most recently, to Fibre Channel (FC) and iSCSI SANs.
Direct-Attached Storage

DAS is storage attached to one server only. Examples of DAS solutions are a set of internal hard disks within a server or a rack-mounted RAID connected to a server through a SCSI or FC controller. The main feature of DAS is that it provides a single server with fast, block-based data access to storage directly through an internal or external bus. (Block-based, as opposed to file-based, means that data is moved in unformatted blocks rather than in formatted files.)
DAS is an affordable solution for servers that need good performance and do not need enormous amounts of storage. For example, DAS is often suitable for infrastructure servers, such as DNS, WINS and DHCP servers, and domain controllers. File servers and web servers can also run well on a server with DAS.

The main limitation of DAS is that it is directly accessible from a single server only, which leads to inefficient storage management. For example, Figure 2-1 shows a LAN in which all storage is attached directly to servers. Despite the web and App2 servers having excess storage, there is no easy way for these resources to be redeployed to either the Mail or App1 server, which need more storage space.

[Figure 2-1]

The main tool used for managing DAS in Windows is the Disk Management console. This tool, which you can access in Server Manager, enables you to partition disks and format volume sets. You can also use the Diskpart.exe command-line utility to perform the same functions available in Disk Management and to perform additional functions as well.
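To make this concrete, here is a minimal sketch of a Diskpart script for bringing a new data disk into service. The disk number, volume label, and drive letter below are placeholder values, so confirm the intended disk with `list disk` before running anything like this:

```shell
rem Illustrative Diskpart script (run as: diskpart /s prepare-disk.txt)
rem Disk 1, the "Data" label, and letter E are example values only.
select disk 1
online disk
attributes disk clear readonly
create partition primary
format fs=ntfs quick label="Data"
assign letter=E
```

Saving the commands to a text file and running them with `diskpart /s` makes the operation repeatable across servers, which is the main reason to prefer Diskpart over the Disk Management console for bulk work.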
Network-Attached Storage

NAS is self-contained storage that other servers and clients can easily access over the network. A NAS device or appliance is a preconfigured server that runs an operating system specifically designed for handling file services. The main advantage of NAS is that it is simple to implement and can provide a large amount of storage space to clients and servers on a LAN. The downside of NAS is that, because your servers and clients access a NAS device over the LAN as opposed to over a local bus, access to data is slower and file-based as opposed to block-based. NAS performance is, therefore, almost always slower than that of DAS.

Because of its features and limitations, NAS is often a good fit for file servers, web servers, and other servers that don’t need extremely fast access to data. In addition, NAS appliances come with their own management tools, which are typically web-based.

Figure 2-2 shows a network in which clients use a NAS appliance as a file server.

[Figure 2-2]
Storage-Area Networks

SANs are high-performance networks dedicated to delivering block data between servers and storage subsystems. From the point of view of the operating system, SAN storage appears as if it were installed locally. The most important characteristic that distinguishes a SAN from DAS is that in a SAN, the storage is not restricted to one server but is, in fact, available to any of a number of servers. (SAN storage can be moved from server to server, but outside of clustered file system environments, it is not accessible by more than one server at a time.)


A SAN is made up of special devices: SAN network adapters on the host servers, called HBAs; cables and switches that help route storage traffic; disk storage subsystems; and tape libraries. The hardware devices that connect servers and storage in a SAN are collectively called the SAN fabric. All these devices are interconnected by fiber or copper. When connected to the fabric, the available storage is divided into virtual partitions called logical unit numbers (LUNs), which then appear to servers as local disks.

SANs are designed to enable centralization of storage resources while eliminating the distance and connectivity limitations posed by DAS. For example, parallel SCSI bus architecture limits DAS to 16 devices at a maximum (including the controller) distance of 25 meters. Fibre Channel SANs extend this distance limitation to 10 km or more and enable an essentially unlimited number of devices to attach to the network. These advantages enable SANs to separate storage from individual servers and to pool unlimited storage on a network where that storage can be shared.

SANs are a good solution for servers that require fast access to very large amounts of data (especially block-based data). Such servers can include mail servers, backup servers, streaming media servers, application servers, and database servers. The use of SANs also enables efficient long-distance data replication, which is typically part of a disaster recovery (DR) solution.

Figure 2-3 illustrates a simple SAN.

[Figure 2-3]

SANs generally occur in two varieties: Fibre Channel and iSCSI.
FIBRE CHANNEL SANS

Fibre Channel (FC) delivers high-performance block input/output (I/O) to storage devices. Based on serial SCSI, FC is the oldest and most widely adopted SAN interconnect technology. Unlike parallel SCSI devices, FC devices do not need to arbitrate (or contend) for a shared bus. Instead, FC uses special switches to transmit information between multiple servers and storage devices at the same time.

The main advantage of FC is that it is the most widely implemented SAN technology and has, at least until recently, offered the best performance. The disadvantages of FC technology are the cost of its hardware and the complexity of its implementation. Fibre Channel network components include server HBAs, cabling, and switches. All these components are specialized for FC, lack interoperability among vendors, are relatively expensive, and require special expertise.
ISCSI SANS

Internet SCSI (iSCSI) is an industry standard developed to enable transmission of SCSI block commands over an Ethernet network by using the TCP/IP protocol. Servers communicate with iSCSI devices through a locally installed software agent known as an iSCSI initiator. The iSCSI initiator executes requests and receives responses from an iSCSI target, which itself can be the end-node storage device or an intermediary device such as a switch. For iSCSI fabrics, the network also includes one or more Internet Storage Name Service (iSNS) servers that, much like DNS servers on a LAN, provide discoverability and zoning of SAN resources.
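As a rough sketch of the initiator-target exchange described above, the following commands use iscsicli.exe, the command-line counterpart to the iSCSI Initiator control panel; the portal address and target IQN shown are invented examples:

```shell
rem Point the local initiator at a target portal (address is a placeholder)
iscsicli QAddTargetPortal 192.168.1.50

rem Discover the targets that portal exposes
iscsicli ListTargets

rem Log in to one of the discovered targets (IQN is a placeholder)
iscsicli QLoginTarget iqn.1991-05.com.example:storage.disk1
```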

By relying on TCP/IP, iSCSI SANs take advantage of networking devices and expertise that are widely available, a fact that makes iSCSI SANs generally simpler and less expensive to implement than FC SANs.

Aside from lower cost and greater ease of implementation, other advantages of iSCSI over FC include:

* Connectivity over long distances: Organizations distributed over wide areas might have a series of unlinked SAN islands that the current FC connectivity limitation of 10 km cannot bridge. (There are new means of extending Fibre Channel connectivity up to several hundred kilometers, but these methods are both complex and costly.) In contrast, iSCSI can connect SANs in distant offices by using in-place metropolitan area networks (MANs) and wide-area networks (WANs).
* Built-in security: No security measures are built into the Fibre Channel protocol. Instead, security is implemented primarily through limiting physical access to the SAN. In contrast to FC, the Microsoft implementation of the iSCSI protocol provides security for devices on the network by using the Challenge Handshake Authentication Protocol (CHAP) for authentication, and the Internet Protocol security (IPsec) standard for encryption. Because these methods of securing communications already exist in Windows networks, they can be readily extended from LANs to SANs.


The main disadvantage of an iSCSI SAN is that, unless it is built with dedicated (and expensive) 10-Gigabit Ethernet cabling and switches, its I/O throughput is lower than what an FC-based SAN can deliver. And if you do choose 10-Gigabit equipment for your iSCSI SAN instead of the much more common choice of gigabit Ethernet, the high cost of such a solution eliminates the price advantage of iSCSI relative to FC.

Configuring a SAN Connection with iSCSI Initiator

You can use the iSCSI Initiator built into Windows Server 2008 and Windows Server 2008 R2 to connect to an iSCSI SAN, configure the features of this iSCSI connection, and provision storage. To configure a SAN connection with iSCSI Initiator, select the tool from the Administrative Tools group in the Start menu. This step opens the Targets tab of the iSCSI Initiator Properties dialog box, as shown in Figure 2-4.

[Figure 2-4]

To connect to an iSCSI SAN, specify an iSCSI target by name in the Target text box and then click Quick Connect. (Quick Connect is a new feature in Windows Server 2008 R2.) The Targets tab also provides access to Multipath I/O (MPIO) settings through the Devices and Connect buttons. MPIO enables you to configure multiple simultaneous connections to an iSCSI target so that if one adapter fails, another connection can continue processing I/O without any interruption of service. To enable MPIO, use the Add Features Wizard to add the Multipath I/O feature.
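If you would rather script the feature installation than click through the Add Features Wizard, the Multipath I/O feature can also be added from an elevated command prompt with ServerManagerCmd (still present in Windows Server 2008 R2, though superseded there by the ServerManager PowerShell module):

```shell
rem Install the Multipath I/O feature, then confirm it appears as installed
ServerManagerCmd -install Multipath-IO
ServerManagerCmd -query | findstr /i "Multipath"
```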

After you establish a connection to an iSCSI target, you can use the following tabs to configure the connection:

* Discovery: On this tab, you can discover targets on specified portals and choose iSNS servers.
* Favorite Targets: Use this tab to ensure that connections to selected iSCSI targets are restored every time the local computer restarts.
* Volumes And Devices: This tab enables you to provision volumes and devices on targets and bind to them so they are readily available on system restart.
* RADIUS: This tab enables you to specify a RADIUS server and shared secret for the authentication of the iSCSI connection.
* Configuration: This tab enables you to require negotiation of the CHAP authentication protocol and IPsec encryption for all connections to the local iSCSI Initiator. The tab also provides a unique identification number for the iSCSI Initiator, which you can specify on a remote iSCSI target to configure a connection to the local machine.

Other Tools for Managing SANs

Windows Server 2008 and Windows Server 2008 R2 include the Virtual Disk service (VDS), an application programming interface (API) that enables FC and iSCSI SAN hardware vendors to expose disk subsystems and SAN hardware to administrative tools in Windows. When vendor hardware includes the VDS hardware provider, you can manage that hardware within Windows Server 2008 and Windows Server 2008 R2 by using iSCSI Initiator and other tools, such as Disk Management, Storage Manager for SANs (SMfS), Storage Explorer, or the command-line tool, DiskRAID.exe.

* Storage Manager for SANs: SMfS is available in Windows Server 2008 and Windows Server 2008 R2 as a feature you can add by using the Add Features Wizard. You can use SMfS to manage SANs by provisioning disks, creating LUNs, and assigning LUNs to different servers in the SAN.
* Storage Explorer: Storage Explorer is available by default in Windows Server 2008 and Windows Server 2008 R2 through the Administrative Tools program group. You can use Storage Explorer to display detailed information about servers connected to the SAN and about fabric components such as HBAs, FC switches, and iSCSI initiators and targets. You can also use Storage Explorer to perform administrative tasks on an iSCSI fabric.
* DiskRAID: DiskRAID is a command-line tool that enables you to manage LUNs in a VDS-enabled hardware RAID.
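As an illustrative sketch only (DiskRAID is interactive, and the exact commands available depend on the vendor's VDS hardware provider), a session that creates a new LUN might look roughly like this; the subsystem number and size are invented:

```shell
rem Hypothetical DiskRAID session; adjust to your VDS provider's capabilities
diskraid
DISKRAID> list providers
DISKRAID> select subsystem 0
DISKRAID> create lun simple size=100
DISKRAID> list luns
```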

Tuesday, July 26, 2011

Google to dump support for Microsoft’s IE7

Computerworld – Google will drop support for Microsoft’s Internet Explorer 7 (IE7) and Mozilla’s Firefox 3.5 browsers for its online apps, including Gmail and Docs.






“Beginning August 1, we’ll support the current and prior major release of Chrome, Firefox, Internet Explorer and Safari on a rolling basis,” said Venkat Panchapakesan, who heads Google’s enterprise engineering team, in a company blog Wednesday. “Each time a new version is released, we’ll begin supporting the update and stop supporting the third-oldest version.”

By that scheme, Google will stop supporting IE7, Firefox 3.5, Apple’s Safari 3 and its own Chrome 9, all of which have been superseded by two newer versions.

IE7, for example, has been superseded by IE8 and IE9; the same goes for Firefox 3.5, which has been replaced by Firefox 3.6 and Firefox 4.

After Aug. 1, users running those browsers may have trouble with some features in Gmail, Google Calendar, Google Talk, Google Docs and Google Sites. At some point, those apps may stop working entirely.

“For Web applications to spring even farther ahead of traditional software, our teams need to make use of new capabilities available in modern browsers,” said Panchapakesan. “Older browsers just don’t have the chops to provide you with the same high-quality experience.”

Panchapakesan didn’t mention Opera Software’s Opera browser in his blog, an omission that prompted many users to leave comments.

“Lack of support for a browser as standards-compliant as Opera is absurd,” complained someone identified as “Isildur.”

Opera accounts for approximately 2% of all browsers, according to Web measurement company Net Applications, less than one-sixth the share of Chrome and less than one-third that of Safari.

The numerous “where’s Opera”-style comments prompted one wag to say, “Wow, every existing Opera user left a comment here.”

By Net Applications’ statistics, the browsers Google will retire represent a minority of those in use.

Last month, IE7 accounted for 7% of all the browsers used worldwide, said Net Applications on Tuesday. Firefox 3.5 owned an even-smaller share of 1.4%, while Safari 3 accounted for only 0.1%. Altogether, the browsers destined for the dustbin controlled less than 9% of the browser market.

This was not the first time that Google has warned customers and users to upgrade to a newer browser. In January 2010, the search giant said it was dumping Google Docs support for IE6, the Microsoft browser that still accounts for 10.4% of all browsers in use.

Many IE6 users, however, are in China, where the government blocks access to Google’s online applications, and with which Google has a contentious relationship.

But while Google and others have stopped supporting the 10-year-old IE6, Google is one of the first online software vendors to drop 2006’s IE7 from a support list. Microsoft, for instance, has committed to supporting IE7 on Windows XP until April 2014, and on Vista for three years longer.

Panchapakesan urged people running one of the browsers on Google’s kill list to upgrade to a newer edition.

The end-of-support plan for Google Apps will not disrupt access to its search site using older browsers.

Monday, July 25, 2011

Microsoft expands Intune services

IDG News Service - Anticipating use of Intune by larger organizations, Microsoft is outfitting the managed desktop service with a number of new capabilities that should make its use more appealing in enterprise settings.



"The first version of Windows Intune didn't have all the capabilities that all of our on-premise products do, so that slowed down adoption by larger customers," said Alex Heaton, Microsoft director of Intune. "With this next release we will add significant new capabilities that will make it attractive to larger customers."

A new Intune beta service, launched Monday, will offer the ability for administrators to distribute and install third-party software across their systems, Heaton said. The console can also be locked down for read-only access, allowing junior administrators, partners and business analysts to access information without giving them full rights to make changes.

Designed for organizations with limited IT help, Intune is a Microsoft-hosted service that monitors and updates Windows 7-based desktop and laptop computers. Subscribers of this service can update any desktop PCs running Windows XP to Windows 7 at no additional cost.

Microsoft formally launched the Intune hosted service in March after a yearlong beta. With the service, the customer is provided with an Internet-accessible console, from which all of an organization's computers can be managed. From this console, an administrator can apply Windows updates and patches, monitor PCs, manage security, keep inventory of PCs and remotely administer an ailing PC. From its own data centers, Microsoft will queue the updates, as well as manage all the back-end server software needed for administration duties.

Heaton would not reveal how many customers Intune has, though he noted that the average customer has 250 PCs or less. Recent customers include the California Strawberry Commission and industrial real estate broker IDI, which manages 250 computers with the service.

With the new beta, he explained, Microsoft is beginning to assemble additional services that could make it appealing to larger organizations, those businesses with thousands of desktops. While the beta still doesn't have all the features larger organizations require, it paves the way for such an offering in the years to come.

"I wouldn't say this is the release to make us enterprise-ready. Our strategy is to do frequent releases until we get to parity of our on-premise products," Heaton said.

One of the new features is software distribution. The current version of Intune can store and deploy Windows patches. The new version can do the same for any Windows program packaged as an .exe or .msi file. This will allow administrators to upload a program once and then have it installed across all the machines.

Monday, July 18, 2011

Microsoft plans 22 patches for Windows, Office next week

Sole critical bulletin will fix flaws only in Vista and Windows 7
Computerworld - Microsoft today said it will issue four security updates next week, only one of which is pegged as critical, to patch 22 vulnerabilities in Windows and Visio 2003.











Three of the four updates will address vulnerabilities in Windows, while the fourth will tackle problems in Microsoft Visio 2003, which was last patched in February.

The three updates that apply to Windows Vista will all patch bugs in Service Pack 1 (SP1), the edition set to head into retirement. Office XP, which will also be dropped from security support, has received its last fix: None of next week's four bulletins will affect that 10-year-old application suite.

And for the second month in a row, Microsoft's security updates likely won't lead the news.

"Last month more of the concerns were about the hacks of Sony Pictures and other sites," said Storms. "And it looks like other stories will take the cake this month."

Apple, for instance, faces a pair of "zero-day" vulnerabilities -- unpatched bugs that are already being exploited -- in the iOS mobile operating system that powers the iPhone and iPad.

"The focus for this month is not necessarily OSes and applications, but the constant stream of vulnerabilities being discovered in the mobile devices connected to our corporate networks," said Paul Henry, security and forensic analyst at Lumension, in an email today. "Microsoft does not have exclusivity when it comes to issuing patches."

Microsoft's security updates will be released at approximately 1 p.m. ET on July 12.

Sunday, July 17, 2011

Elgan: What I lost on the Google+ Diet II

After using only Google's new social network for a week -- forsaking all others -- here's what I learned

Computerworld - On July 8, I went on the Google+ Diet, using Google's new social network for all my online communication. As part of the diet, I stopped using Facebook, Twitter, Foursquare, and several other services. I even stopped using e-mail.







From a Google+ Diet perspective, the advantage of replacing your blog with a Google+ profile is that blogging happens in the same space as everything else you do. Personally, I love not having a gazillion windows and tabs going with all my social activity. I just say what I want to say, then choose who I want to say it to.

Most people have no interest in leaving Facebook

My Circles that are full of editorial colleagues and brilliant strangers are frenetic hives of activity. But in my "Family" and "Friends" Circles there is nothing but the sound of crickets.

I've tried to convince the people I care most about on Facebook to come sign up for Google+, but most have no interest. I believe they'll warm to it over time, but for now, it's clear that most Facebook fans are firmly embedded in that social network.

You can post on Twitter and Facebook and send e-mail all from Google+

Replacing other communications media with Google+ doesn't mean they disappear elsewhere. There are multiple ways (browser plug-ins, RSS schemes and others) to have your Google+ posts appear or be linked to automatically on those other services. New apps and services are coming out every day that make this easy to do.

Google+ ends social networking fatigue, but can induce Google+ fatigue

Google makes it super easy to follow people, comment and interact on Google+. It also lacks Twitter- and Facebook-like limits on post size. As a result, it's easy to over-commit, and end up with a fire hose of information that leaves you exhausted.

The cure for Google+ fatigue is to constantly un-Circle the least interesting people, or the people who are using up too much time with stuff you don't like. It doesn't happen by itself, unlike on Facebook where EdgeRank limits your incoming feed without any action on the part of the user.

Google+'s system for friending and following is harder to understand, but better in practice

Twitter and Facebook are easy to understand when it comes to friending and following. On Twitter, you see the posts of the people you follow. Period. On Facebook, you see the posts of the people you have friended (it takes two), minus the posts blocked by EdgeRank.

But Google+ is both more complex and better. On Google+, you "follow" people by putting them into Circles -- say, one for "Friends," another for "Family" and another for "Co-workers." When you click on a "Stream" icon, you see the posts of all your Circles. Or you can choose the posts only of any one Circle.

Saturday, July 16, 2011

That's What He Said: Ballmer on the State of Microsoft

Steve Ballmer's WPC 2011 keynote turned into a lengthy state of the state of Microsoft, as he expounded on products like Skype, Windows Azure and Bing and how they will affect enterprises, partners and competitors. Here are some of Big Steve's most quotable fightin' words.







Bing

"Bing is probably the Microsoft product that partners spend the least time with, but I think that'll change -- we're thinking about developing an architecture that will open up Bing to be more of a platform."

"Market share for Bing in the U.S. grew to 14 percent this year, up 3 points. It's a 30 percent growth in number of search queries. In the past year, we've integrated in all of the Yahoo traffic, which means we are serving 30 percent of search queries in U.S. We went from 10 percent to 30 in one year."

Windows Server, Windows Azure and the public and private cloud

"It has been a big year for our private cloud with Windows Server and Hyper-V products and public cloud with Windows Azure ... In the last year Windows Server has built market share. Seventy-six percent of servers sold this year were Windows Server, and there's been equivalent progress with 40 percent of databases running on top of our SQL Server database."

"Having a strategy that spans from public and private is a unique strength for Microsoft. Competitors like VMware, Oracle, Google and Amazon have offers that have merit but in a limited way. We think what you want is to have the flexibility to mix and match the public and private environments."

Microsoft Dynamics (CRM and ERP)

"This is the 10-year anniversary of entering the business application space, and we've experienced 20 percent compound annual growth. Dynamics is now its own standalone division."

"Recently the LA public schools moved 70,000 users to Microsoft Dynamics CRM."

"I've been asked: When does ERP go to the cloud? Starting with Dynamics NAV early next year, we will start putting ERP in the cloud."

Xbox Kinect (as a possible business tool)

"We brought our Xbox Kinect sensor product to market this year for the entertainment world and yet the amount of interest from businesses and partners in using Kinect in commercial applications is really high."

Windows 7 and Windows 8

"We're selling lots of Windows. We saw 350 million new PCs sold in last year, which compares to other guys [that would be Apple] that are in the 20 million range. 350, last time I checked, is a lot more than 20."

"We did a brief glimpse of Windows 8 at conferences a month or two ago. We made it clear that we are supporting ARM processor architectures in addition to Intel. Windows 8 will be a true reimagining of Windows PCs and the dawning of Windows slates."

Windows Division CFO and Corporate VP Tami Reller later announced that all Windows 7 hardware will be compatible with Windows 8.

Thursday, July 14, 2011

SharePoint Bible: Your Complete Guide to Microsoft's Collaboration Software

Updated: From pricing questions to enterprise and cloud adoption trends to reviews of SharePoint 2010, CIO.com's SharePoint Bible covers it all. Our guide delivers expert reviews, advice on planning and rollout, and news analysis on Microsoft's powerhouse content and collaboration platform.




The Boston Red Sox have many weapons to keep the team winning on the field: powerful hitters, a seasoned pitching rotation, and an experienced coaching staff.

But off the field — and in the data center — a key role player for the Sox has been SharePoint 2010.

At the SPTechCon SharePoint conference in Boston last week, Red Sox IT director Steve Conley stepped up to the plate for a Q&A with Microsoft's (MSFT) SharePoint Product Management Director Christian Finn.

Conley's IT group consists of six team members that serve 250 people in the Red Sox organization. For 81 home games a year, the IT group is "in charge of anything that has a plug," says Conley, with a laugh. "If you have to turn it on, I'm probably going to get called."

The exception, he adds, is Fenway Park's giant scoreboard in center field, which is managed by a third party.

One of the biggest advances the Red Sox have made recently is revamping their intranet portal, named "Red Sox Central," using SharePoint 2010.

The team's intranet homepage includes features such as Webcasts of game highlights, photo galleries, a ticket dashboard for executives to manage their game tickets and weather widgets for Boston and Fort Myers, Fla., home of the Red Sox spring training camp.

Within Red Sox Central, Conley — a Red Sox employee since 2001 when they "still had typewriters" — wanted to configure SharePoint to solve problems such as requesting and allocating tickets, getting credentials from visitors, and paying and organizing invoices.

On the SPTechCon stage, Conley explained how moving to SharePoint in the past few years has liberated the Red Sox from their tech dark days of scanning and e-mailing documents, snail-mailing invoices, and waiting days to hear back about ticket requests.
Ticket Request Application

Red Sox employees are allotted a certain number of tickets for the season. These ticket requests had to be made through the Red Sox ticket office via paper forms, a time-consuming process that could take anywhere from hours to days.

"We were able to automate all that with SharePoint using an online form for ticket request and acquisition."

As Conley expected, the online ticket request form quickly became, and has remained, the most popular section of the Red Sox Central site for employees.

Wednesday, July 13, 2011

Microsoft: Buy Windows 7 today, keep same PC for Windows 8 upgrade

Microsoft may lower hardware requirements for Windows 8

Any computer running Windows 7 will be upgradable to Windows 8, Microsoft said today while pledging to keep hardware requirements level or even lower when the next version of Windows comes out.




Microsoft says it has sold more than 400 million Windows 7 licenses, but Windows XP is still nearly twice as widely used worldwide. Yet Microsoft has already shown two technical previews of Windows 8 and announced today that a further preview is coming in September. Microsoft therefore faces a balancing act: convincing businesses and consumers to upgrade to Windows 7 despite the promise of a new operating system around the corner.

"Two-thirds of business PCs are still on Windows XP. Moving these users to Windows 7 is important and urgent work for us to get after together," Tami Reller, corporate VP and CFO for Windows, said at Microsoft's Worldwide Partner Conference at the Staples Center in Los Angeles. The conference is Microsoft's opportunity to talk to partners about how they can make money together.

Windows 8 could be released next year, so, to prevent businesses from holding on to their cash, Microsoft is arguing that users should upgrade now and use the same PC to run Windows 8 later.

"Whether upgrading an existing PC or buying a new one, Windows will adapt to make the most of that hardware," Reller said. Windows 8 is for "the hundreds of millions of modern PCs that exist today and for the devices of tomorrow."

As we learned earlier this year, Windows 8 will be optimized for both touch-screen tablets and PCs. Microsoft announced at January's Consumer Electronics Show that it will support the ARM architecture, a lower-powered chip for mobile devices, and last month Microsoft showed off the new tablet interface.

"Windows 8 is a true re-imagining of Windows, from the chip to the interface," Reller said. Despite the re-imagining, Microsoft will keep system requirements flat or reduce them. To run Windows 7, PCs need at least a 1GHz processor, 1GB RAM, 16GB available disk space and DirectX 9 graphics.

Windows 7 tablets exist today, but regardless of Microsoft's advice, consumers are better served waiting for Windows 8 tablets to hit the market, because those are likely to be more advanced, and it's not yet clear whether Microsoft can create something better than Apple's iPad. The "buy today, upgrade later" advice should be applied to PCs only.

While a Windows 8 release date hasn't been revealed, Microsoft said today it will provide another technical preview at the BUILD Conference in Anaheim, Calif., Sept. 13-16.

The conference will "show modern hardware and software developers how to take advantage of the future of Windows," Reller said. "It is the first place to dive deep into the future of Windows."

Windows 8 will feature a start screen composed of applications represented in "tiles," which Microsoft believes are more useful than Apple's iPad icons because they are capable of providing details such as the current weather or state of an application. The traditional interface of Windows XP and Windows 7 will also be there for desktop-oriented applications.

Monday, July 11, 2011

How to survive a cloud outage

You can't prevent your cloud service provider from going down, but there are ways to protect yourself

For example, Web-based disk encryption vendor AlertBoot, headquartered in Las Vegas, used to pay $50,000 a month just for electricity, AlertBoot CEO Tim Maliyil says.



"We had two physical data centers at one point -- and you can't believe how happy we were to shut it down," he says. "Now, two clouds, bandwidth and hosting is $16,000 a month. There was so much waste of electricity and capacity. The cloud really minimized our costs and ongoing expenses."

Transitioning to cloud providers wasn't difficult, because AlertBoot was already using virtualization software from VMware in its traditional data center, he says. The two cloud providers the company picked are SunGard and OpSource, both of which use VMware technology as well. (Systems integrator Dimension Data announced recently that it plans to acquire OpSource.)

Switching from one cloud provider to another now takes just a minute or two, he says, and the backup cloud can ramp up quickly to handle increased load. The switch-over itself is handled by a service from Zeus Technology, a U.K. vendor that helps companies move applications from one cloud to another.
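The switch-over described above can be reduced to a simple rule: route traffic to the first provider whose health check passes. Here is a minimal Python sketch of that selection logic; the provider names are invented, and the pluggable `probe` callable stands in for a real HTTP health check against each provider's endpoint:

```python
def first_healthy(providers, probe):
    """Return the first provider whose health probe succeeds, else None.

    `probe` is any callable returning True when a provider is up; in a
    real deployment it would issue an HTTP request to a health endpoint.
    """
    for provider in providers:
        try:
            if probe(provider):
                return provider
        except OSError:
            continue  # unreachable counts as unhealthy; try the next one
    return None  # every provider is down

# Example: the primary is down, so the backup cloud is selected.
up = {"backup-cloud"}
print(first_healthy(["primary-cloud", "backup-cloud"], lambda p: p in up))
# -> backup-cloud
```

A real switch-over service like the one Zeus provides also has to move state (DNS records, sessions, data replicas); this sketch covers only the selection step.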

Maliyil says his company selected these vendors because they are known for their enterprise-level reliability. "For the kind of business we're in, and our customers' [lack of] tolerance for failure, we've steered away from the Amazon infrastructure," he says.

Another vendor that helps companies manage services running on multiple clouds is rPath, which has more than 90 corporate customers, mostly large enterprises and ISPs, including ADM, Fujitsu, Qualcomm and EMC.

The company currently deploys to 16 image formats, which are snapshots of applications that run in cloud environments. Adding another cloud to the list typically takes less than a week, says Jake Sorofman, rPath's chief marketing officer. "It's fairly trivial for us."

The company currently supports Amazon EC2, VMware, Citrix Xen, Microsoft Hyper-V, Rackspace and several other formats. Once an application is in the rPath system, it takes as little as 15 minutes to generate a new image and deploy it to a new cloud, he says.

However, architecting an application for the rPath system in the first place can take a little longer. "The process of packaging a new application for our platform could take from a couple of hours to a couple of days, depending on its complexity," he says. "But we have a professional services team that does that work for customers if they choose."

Many applications are already packaged up, he says, including the full range of Windows and Linux operating systems, WebLogic and WebSphere, SAP, EMC and RSA products.

"There's a fairly extensive list of complete stacks that have already been modeled using our technology, and can be leveraged," he says.

And having the option to move applications between clouds does more than just provide backup options for companies, he says - it also allows companies to get the best possible deals from their providers.

"There is an arbitrage opportunity that comes with having choice," he says. "Being able to optimize where workloads are running based on performance, policy and price. And, to the extent that you can easily move a workload between Amazon, Rackspace or other environments, you have leverage over your service providers because you have eliminated lock-in."

Saturday, July 9, 2011

When Tech Helps Keep Money Clean

In this context, Synlog, a Globsyn company providing IT solutions to the banking and financial services domain, has launched RAFTS (Real-time AML Filter for Turbo Swift) software that tracks transactions on a real-time basis. While RAFTS has already been well received in the international market—where it is represented by Synlog’s business partner, BankServ Inc (USA)—it has only recently been customised for the Indian market by Synlog’s development centres in Kolkata and Chennai.


Seernani explains how RAFTS works: “Banks that are authorised to deal in foreign exchange (forex) do so electronically over the international SWIFT network. A connection to the SWIFT network is available through what is called SNL (SWIFT-NetLink), which is a combination of hardware and software. However, in order for the bank to be able to create messages (i.e., transfer requests) to be sent over the SWIFT network using the SNL, the bank needs message management software. The TurboSWIFT suite from BankServ is a popular message management software that has three essential components. TurboSWIFT is the main server that caters to all the users in a bank; TurboConnect is the client software that bank users have installed on their machines to log onto the server to perform their tasks (message creation, verification, authorisation, or system management); and TurboWEB is an Internet browser based front-end for TurboSWIFT that enables users across multiple bank branches to access the TurboSWIFT server (over a wide area network).”

RAFTS functions as an add-on ‘plug and play’ solution to the main TurboSWIFT interface. As an overlay, RAFTS monitors every incoming and outgoing forex transaction occurring over the SWIFT network. To do so, RAFTS uses the Bridgers List of words/patterns. The Bridgers List, issued and regularly updated by organisations in Australia, Canada, Europe, the UK, the US, the United Nations, and the World Bank, is a comprehensive listing of names and keywords that could spell trouble. For instance, it would contain words like Osama Bin Laden, Dawood Ibrahim, Al Qaeda, etc.

Dual protection: manual plus automatic screening

When RAFTS encounters any of these words in a message, it moves the message to a separate folder of suspect transactions, in queue for manual authorisation. This queue appears on the computer terminal of the AML officer of the bank, who may then allow the transaction to go ahead, or reject it and send it back to its creator.

Using RAFTS, an officer may also append patterns to the black-lists (the existence of this pattern means the transaction is to be stopped) and white-lists (the existence of this pattern indicates the transaction is to be allowed). Further, the officer may set the priority in which messages of various kinds (such as letters of credit, advice of cheque, bank guarantees, etc) are to be scanned, as well as define the message types to be scanned or passed without scanning. Essentially, while RAFTS enables a bank to adhere to the first cardinal rule to prevent money laundering—know your customer—it also necessitates a bank's strict compliance with the second cardinal rule—know your employee—in appointing a completely trustworthy AML officer. And that, in turn, will go a long way in ensuring that funds are not misused, resulting in a safer world in the long run.
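At its core, the screening flow described here is pattern matching against maintained lists, with flagged messages parked for manual review. The following Python sketch illustrates that flow only; the list entries and message text are invented, the real Bridgers List and SWIFT message formats are far richer, and the white-list-first precedence is an assumption, since the article does not say how a message matching both lists is resolved:

```python
# Invented, illustrative list entries -- not the actual Bridgers List.
BLACKLIST = {"al qaeda", "dawood ibrahim"}   # match -> hold for review
WHITELIST = {"acme exports ltd"}             # match -> allow immediately

def screen(message, blacklist=BLACKLIST, whitelist=WHITELIST):
    """Return 'allow' or 'hold' for a free-text payment message.

    Held messages would be queued for the bank's AML officer, who can
    release or reject them -- the manual step RAFTS requires.
    """
    text = message.lower()
    # Assumed precedence: white-list entries override black-list hits.
    if any(pattern in text for pattern in whitelist):
        return "allow"
    if any(pattern in text for pattern in blacklist):
        return "hold"
    return "allow"

print(screen("Wire transfer to Acme Exports Ltd"))     # -> allow
print(screen("Payment reference: Dawood Ibrahim"))     # -> hold
```

A production filter would also handle transliterations and near-matches (fuzzy matching on names), which simple substring checks miss.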

Thursday, July 7, 2011

Let's face it: HTML5 is no app dev panacea

Don't believe the hype: building serious applications still takes more than mere Web markup

2. HTML wasn't designed for applications
The buzz on HTML5 is that it's HTML souped up with improvements to support Web applications. But better app support wasn't always the direction of the HTML standard. Originally, the successor to XHTML 1.1 was going to be XHTML2, which would have emphasized semantic markup and integration with XML. True to its roots, XHTML2 was a document-centric markup language.







The XHTML2 effort foundered, however, and a splinter group called the Web Hypertext Application Technology Working Group (WHATWG) broke off from the W3C's HTML activity to begin work on a different draft of the standard, one that emphasized elements useful for Web applications. It was this work that eventually became the basis of what we now know as HTML5.

But was HTML5 really the best direction to go? HTML5's ballyhooed canvas tag, for example, essentially means "insert a bunch of programmatically generated graphical content that can't be described by markup." That's a pretty strange use of a markup language. If we keep going down this road, are we not perhaps shoehorning Web standards into a role for which they were never really suited? It may be a necessity for the Web, but do we really want to put ourselves in the same bind everywhere else?

3. HTML sucks for building UIs
One of Apple's big innovations with the original Mac was publishing a detailed set of Human Interface Guidelines for developers. As a result, unlike DOS programs, Mac apps looked alike and behaved alike. They all used the same kind of menus, the same dialog boxes, and the same alerts. The resulting impression of coherence and consistency was a big reason why the Mac OS was so wildly successful, even when GUI desktops were still new and unfamiliar.

With Web apps, we're back to the DOS days. Interface designers are free to create any kind of buttons they want, have menus that slide down or pop up from anywhere, and generally paint the entire window any way they see fit. Without a standard set of widgets, apps built with Web technologies feel inconsistent and sometimes downright alien. Even if you go out of your way to build a UI that looks dead-on like a native iPhone app, the same UI won't fit in on an Android phone. Who's going to take the time to build Web-based apps that feel "native" on every platform? Nobody, that's who. (Let's not get started on the screen-size issue.)

Tuesday, July 5, 2011

Sorry, but the TDL botnet is not 'indestructible'

Malware and alarmism over its proliferation are nothing new -- and the latest boot-sector rootkit will be cured soon enough

The sophistication of the TDL rootkit and the global expanse of its botnet have many observers worried about the antimalware industry's ability to respond. Clearly, the TDL malware family is designed to be difficult to detect and remove. Several respected security researchers have gone so far as to say that the TDL botnet, composed of millions of TDL-infected PCs, is "practically indestructible."




As a 24-year veteran of the malware wars, I can safely tell you that no threat has appeared that the antimalware industry and OS vendors did not successfully respond to. It may take months or years to kill off something, but eventually the good guys get it right.

This isn't the first time we're supposed to be scared of MBR (master boot record)-infecting malware. In 1987, well before the days of the Internet, the Stoned boot virus infected millions of PCs around the world. Subsequent "improvements" in hacking allowed malware authors to create DOS viruses that could manipulate the operating system to hide themselves from prying eyes. (Actually, the first IBM PC virus, Pakistani Brain, did this in 1986, too.) Computer viruses became encrypted and polymorphic, and they started taking data hostage.

With each ratcheting iteration of new malware offense, you had analysts and doomsayers predicting this or that particular malware program would be difficult to impossible to defend against. But each time the antimalware industry and other software vendors responded to defang the latest threat. Yesterday's indestructible virus became tomorrow's historical footnote.

Even today's malware masterpiece, Stuxnet -- as perfect as it is for its intended military job -- could be neutralized if it became superpopular. Luckily, military-grade worms are few and far between, so most users don't have to suffer while waiting for defenses to be developed.

The truth is, like every other malware family variant, TDL and its botnet will probably be around for years to exploit millions of additional PCs. But it didn't take an advanced superbot to do that. Take a look at any monthly WildList tally. It always contains malware programs written years ago.

Today, almost every malware program lives in perpetuity, dying off only when the exploited program or process dies with it. Boot viruses from the 1980s and 1990s didn't stop being a threat until floppy disks and disk drives went away. Macro viruses didn't die until people stopped writing macros and Microsoft Office disabled automacros by default.

No, what really bothers me more are the malware programs that do something completely new because it takes so much longer for antimalware programs, software vendors, and users to adapt to the tactic. For instance, it took us years to teach folks not to open every file attachment to defeat email viruses and worms -- but it takes the bad guys only a few minutes to change strategies. Today, we need to tell folks not to click on the Internet link emailed to them by a trusted friend and not to install random applications sent to them in Facebook or through their mobile phone.

But our biggest threat is an MBR PC-infector? Been there, done that.

Monday, July 4, 2011

The 10 worst cloud outages (and what we can learn from them)

Sending your IT business to the cloud comes with risk, as those affected by these 10 colossal cloud outages can attest

Think you have to be a Netflix-size business to stay safe? Think again. Twilio, a company that helps developers integrate communications into their Web apps, uses Amazon's EC2 to host the core of its infrastructure -- yet April's outage had little to no impact on its stability.


"The fundamental premise of building on the cloud is assuming that the network will have glitches," says Evan Cooke, Twilio's co-founder and chief technology officer. "We built an infrastructure around the idea that a host can and will fail, so we don't rely on any single machine or single component in the core architecture itself."

Colossal cloud outage No. 2: The Sidekick shutdown. Smartphones make it easy to access your data on the go, but just because something has "smart" in its name doesn't mean it can't be dumb. Case in point: the T-Mobile Sidekick screwup, circa fall 2009.

Remember this fiasco? The Microsoft-owned Sidekick suffered a nearly week-long service outage that left users without access to email, calendar info, and other personal data. Then, adding insult to injury, Microsoft confessed it had completely lost the cloud-stored bits and wouldn't be able to restore them. Evidently, the good ol' gang from Redmond had forgotten to make backups.

The technology may have evolved since then, but the lesson remains the same: When it comes to crucial data, never assume someone else is automatically protecting you. Make sure you understand your cloud provider's disaster recovery setup -- better yet, make your own arrangements to back up your important data independently.
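An independent backup need not be elaborate; even a periodic local snapshot beats relying entirely on the provider. Here is a minimal Python sketch of that idea, which assumes the records have already been fetched from the provider's export API and are JSON-serializable:

```python
import datetime
import json
import pathlib

def export_snapshot(records, backup_dir):
    """Write a timestamped local JSON copy of cloud-hosted records."""
    out = pathlib.Path(backup_dir)
    out.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%dT%H%M%S")
    path = out / f"snapshot-{stamp}.json"
    path.write_text(json.dumps(records, indent=2))
    return path

# Example: snapshot a handful of records into ./backups
saved = export_snapshot([{"id": 1, "subject": "hello"}], "backups")
print(saved)  # e.g. backups/snapshot-20110704T120000.json
```

Run on a schedule (cron, Task Scheduler), a script like this gives you a restore point that survives a provider losing your data outright, as happened to Sidekick users.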

"The same operational rules apply even in the cloud," says Ken Godskind, vice president of monitoring products for AlertSite, a SmartBear company. "Organizations using the cloud can't just assume that because it's in the cloud, all the responsibility for business continuity planning has somehow been transferred to the provider."

Colossal cloud outage No. 3: Gmail fail. Of all cloud services, Google's Gmail presents one of the more likely threats to Microsoft's on-premises stranglehold on the enterprise. Replace your high-maintenance Exchange servers with a cheap, dependable email service backed by Postini. What's not to like?

A rash of irksome outages, the most recent of which had 150,000 Gmail users signing into their accounts only to find blank slates -- no emails, no folders, nothing that indicated they were actually looking at their own inboxes. To Google's credit, it provided regular updates and promised a quick fix. But repairs took as long as four days for some of the affected users.

"How could this happen if we have multiple copies of your data, in multiple data centers?" Google vice president of engineering Ben Treynor asked in a blog posted at the time. "In some rare instances, software bugs can affect several copies of the data. That's what happened here."

Google ended up having to turn to actual physical tape backups in order to restore the data. Ultimately, the company's multilayered data protection did work, but not without leaving thousands of users locked out of their email for days.