Wednesday, August 25, 2010
• Organizing phases into 2-4 week iterations (Sprints), so that a distinct product or deliverable is assigned to each iteration.
• Performing User Acceptance Testing (UAT) early in the system life cycle, rather than at the end of the SDLC, to gather business user feedback early and often.
• Designing the system with the expectation of change. This allows agility and flexibility. I assume that the services we build will need to be dynamic with the changing business climate and will need to change rapidly.
• Maintaining an ongoing "Product Backlog," which is a ranked and prioritized list of requirements. The list is constantly re-ranked and re-prioritized, and the top candidates become inputs into the next Sprint iteration.
• Brief daily Status meetings (Scrum) to check progress, roadblocks, and planned activities.
• Collaboration amongst the team and with the business is key to vetting solid and practical frameworks.
• Keep it Simple. Making governance frameworks and patterns overly complex runs the risk of limiting adoption.
• Business Processes are the foundation of what applications aim to achieve, and in Agile the business stakeholders are key to driving the development process (via a Product Owner).
• Governance requires a multi-stakeholder team, consisting of representation from various IT and business teams.
• The use of a Burn Down Chart can help all stakeholders track the progress of the overall initiative.
• Rapid delivery. Using Agile forces teams to produce working services and deliverables more quickly and forces accountability across the team. This also has a good impact on team morale (developers tend to enjoy the Agile process).
• Good for creating a Knowledge base to capture "lessons learned" early and often for continuous improvement.
• Agile reduces waste: it captures bugs early, avoids gold-plating, and keeps team members committed.
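To make a couple of these practices concrete, here is a minimal Python sketch of a value-per-effort backlog ranking and a simple burn-down series. The item names, the 1-10 business-value scale, and the value/effort formula are illustrative assumptions on my part, not a prescribed Scrum algorithm:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    business_value: int   # hypothetical 1-10 stakeholder ranking
    effort_points: int    # story-point estimate

def rank_backlog(items):
    """Re-rank the Product Backlog: highest value-per-effort first."""
    return sorted(items, key=lambda i: i.business_value / i.effort_points,
                  reverse=True)

def burn_down(total_points, completed_per_day):
    """Remaining work after each day of the Sprint (a burn-down series)."""
    remaining, series = total_points, [total_points]
    for done in completed_per_day:
        remaining = max(0, remaining - done)
        series.append(remaining)
    return series

backlog = rank_backlog([
    BacklogItem("Customer lookup service", 9, 3),
    BacklogItem("Audit logging", 4, 2),
    BacklogItem("Legacy adapter", 6, 8),
])
print([i.title for i in backlog])
print(burn_down(20, [3, 5, 2, 4, 6]))  # -> [20, 17, 12, 10, 6, 0]
```

The top of the re-ranked list feeds the next Sprint; the burn-down series is what the chart plots for all stakeholders.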
Some of the more radical Agile principles that I don't necessarily subscribe to or apply include:
• Limitation of documentation. In projects, especially around governance, certain documentation is key, such as patterns, SLAs, contracts, frameworks, etc.
• Self-organizing teams. I feel that leaving teams to self-organize can be a bit optimistic; instead, a project sponsor is best suited to help organize the team, with strategic input on personnel skills and capabilities.
• Documenting User Stories (Requirements) on sticky notes. While this is good for collaborative working group sessions, I find it is very important to electronically capture and publish the User Stories for all team members to view through an online, browser-enabled tool.
• Individuals and interactions over processes and tools. I feel it's important to follow a disciplined process, and using the right tools to build services and applications is important as well.
• Not following a plan. I feel it is important to have a high-level roadmap for tackling projects, and a maturity model that outlines milestones for adoption and capability achievement.
Thursday, August 19, 2010
Both of these principles rely on virtualization: the ability to run multiple, independent instances of software or hardware within a resource that was originally designed for a single use. The best example is a server hosting multiple, independent operating systems, each performing separately from the others.
The leading vendor in the virtualization space is clearly VMware. VMware has been around the longest, has the greatest market footprint, and makes the most efficient use of its hypervisor. Challengers in the virtualization space include Oracle, Microsoft, and a few others.
The problem with VMware, or the elephant in the room so to speak, is the number of enterprise software products that are still not supported on VMware. The largest example is Oracle, since they are the leader in enterprise software. Oracle software is only supported on Oracle's own product, Oracle VM. Now, I do know there are a lot of customers running Oracle databases and Oracle middleware on VMware who haven't had any issues yet. But if there is an issue, these customers must understand that their environment configured with VMware is not supported by Oracle. You will be required to reproduce your issue in a non-VMware environment OR on Oracle's VM software to get bug and issue support.

This is scary, especially given the number of customers I know who run Oracle on VMware. Just the sheer possibility of losing production data, or having long system downtime due to a non-support issue and then having to reproduce the entire environment to get support, is enough risk for me to take a strong look at Oracle VM so as not to impact my production systems. However, I'm no dummy, and I realize a lot of customers are running on VMware just fine and haven't seen any issues, yet. I'd ask how advanced or complex their environment is. Are they doing RAC, clustering, load balancing, data replication, or other advanced configurations? All of these add complexity and can impact the environment on a virtualization architecture. High-performing applications that require this level of configuration could be risky in a VMware environment, especially since VMware does its own version of memory management, throwing off software like Oracle that manages its own SGA and PGA structures. These are huge considerations for any customer thinking about virtualization-- lack of vendor support is serious stuff, even if you think it works OK.
A second consideration is cost savings, one of the main drivers for virtualization: squeeze more out of existing resources instead of using them for a single purpose. For example, if I buy a physical server and its CPU and memory are very underutilized, I can virtualize more operating systems onto the server and use it for multiple purposes. This is good for hardware savings, since you avoid procuring more hardware for your software applications, but it won't buy you anything in software licensing savings. The large software companies are very aware of this, and they will not give you a break on your software for putting more of it on a single resource. This is why they will not allow tools like VMware to emulate the CPUs to make the customer's licensing less expensive.
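The hardware-savings arithmetic can be sketched in a few lines. This is an illustrative simplification of my own (it ignores hypervisor overhead and CPU oversubscription), and, as noted above, per-CPU software licensing typically does not shrink along with the server count:

```python
def consolidation_plan(host_cpus, host_mem_gb, vm_cpus, vm_mem_gb):
    """How many identical VMs fit on one physical host, by raw capacity.
    Deliberately naive: no hypervisor overhead, no oversubscription."""
    return min(host_cpus // vm_cpus, host_mem_gb // vm_mem_gb)

# A 16-core, 64 GB host running one underutilized app could instead host:
vms = consolidation_plan(16, 64, 2, 8)   # 8 VMs of 2 CPUs / 8 GB each
servers_avoided = vms - 1                # physical boxes you no longer buy
print(vms, servers_avoided)              # -> 8 7
```

Seven servers avoided is real hardware savings; the software bill, priced per CPU or per core, stays put.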
I know VMware has lots of examples where software on their product runs issue-free. This is great, and I applaud the fact that it should work OK in a basically configured environment. However, VMware cannot control other software vendors like Oracle. I would ask Oracle to please get its products officially supported on VMware; then we can all rest easier at night.
So, the big elephant in the room-- to VMware or not to VMware? No virtualization clearly makes a "Cloud Environment" difficult to achieve, especially losing the multitenancy and elasticity principles. So your first option is to ask your vendor what their policy is on virtualization support. If it's a company like Oracle, consider using their Oracle VM product if you still need virtualization. VMware may be your corporate standard for virtualization, but it's not supported, and that is enough risk to avoid it until it is fully supported.
Monday, July 5, 2010
I started off the conference by attending the SOA sessions. Logic would tell me to go to other sessions-- ones that I was unfamiliar with, so I could learn more about areas I wasn't as entrenched in. But, I threw logic out the window, and had to start where my heart is and see what other practitioners have to say about their SOA experiences.
The SOA sessions were interesting to sit through, but I was really surprised and disappointed that they were poorly attended. At a conference that had over 1,000 attendees, the SOA sessions struggled to get 10 attendees per session, while other sessions were "standing room only". The few folks who did attend were trying to learn SOA 101 and how to get started with SOA at their respective organizations.
Despite the low attendance and interest at the SOA sessions, I still took it upon myself to preach SOA to attendees I networked with throughout the entire conference-- whether it was lunch, booth conversations, breaks, or even evening social hour, I spoke the SOA gospel. I quickly realized that most folks I encountered struggled with the fundamentals of SOA and the value proposition it brings to their architecture. The 2 main excuses I heard from attendees about not embarking on SOA were 1) SOA is overkill for their organization (they don't think they need it), and 2) SOA adds more complexity to the architecture and environment, and the last thing they need is more complexity. Through questioning and examples, I tried to prove that both of these "excuses" were incorrect. However, when you're at a conference and only have a few minutes to get your point across to someone you just met for the first time, it's difficult to fundamentally change people's minds. At a minimum, I am confident I planted a few deep seeds for folks to think about, or read more about, SOA when they return to their organizations.
It seems SOA evangelism is an uphill battle. Most folks I meet are non-believers when I meet them, and they need to be converted to SOA. They say IT beliefs are like a religion, so we know conversion can be a difficult endeavor. I am OK with this responsibility and wholeheartedly accept it. My preferred approach to gaining SOA acceptance is to engage and prove value through a Proof of Concept. I feel this is the best way to show SOA benefits. It's also important to understand that SOA truly is a paradigm shift-- a new way of thinking in organizations. With SOA, applications and systems aren't managed as the primary asset; rather, the service is. A lot of folks struggle with this concept, especially since they have been working in a single paradigm and have only understood systems and applications throughout their careers.
What's the solution to overcoming this uphill battle? Continuous education, evangelism, and making sure demonstrable value can be achieved through proofs of concept and prototypes. Show your colleagues the money! IT needs to accept SOA before you can bring it to the business teams, so make sure you don't ignore the developers in your organization-- they are important stakeholders that need to be part of the SOA journey from the beginning stages. Show them the light, and live by the motto "If you build it, they will come"!
Monday, June 21, 2010
-- Dynamic Endpoint Selection is probably most applicable with externally hosted services, outside your corporate firewall. For example, if I need a stock quote or a weather service, I probably have less concern about who the provider is, as long as my Quality of Service (QoS) needs are met. Just get me the info! However, internally hosted services won't have backups that are applicable to my business. Getting customer or supply chain information that is specific to my business is impossible to replicate externally, or is just questionable architecture if it is redundant internally (putting failover and D/R aside).
-- This could work well in a B2B scenario or a multi-agency government scenario, better known as a Community Cloud. If one provider cannot provide a consumer the information needed at the time the consumer needs it, another provider can "step in" and fulfill the request. In a community, many of the participants have access to the same or similar information sources.
-- This could be really nice for automating a B2C retail online transaction. I don't need to know which merchant is selling the product, just the price and terms. On Amazon, we get presented with a checkout to a specific merchant. What if I never need to know who the merchant is until after purchase (like hotwire.com)? Rather, a service does the dynamic selection. This would assume vendors have connected their inventory systems into the marketplace for an automated inventory check before the purchase order completes. Still, today's buyer is normally accustomed to knowing who they are buying a product from before they commit to purchase. Part of this is brand identity, part an educated consumer, and part skepticism from all the horror stories of purchasing products over the Internet.
-- Creating the general interface contract and getting proper provider compliance can be cumbersome. Does the end justify the means? How many consumers will use this approach, given the investment required from the providers?
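A minimal sketch of the Dynamic Endpoint Selection idea, assuming a hypothetical registry of interchangeable providers behind one interface contract and a latency-based QoS rule (the provider names and fields are invented for illustration):

```python
# Hypothetical registry of providers implementing the same interface contract.
PROVIDERS = [
    {"name": "quote-svc-a", "latency_ms": 120, "available": True},
    {"name": "quote-svc-b", "latency_ms": 80,  "available": True},
    {"name": "quote-svc-c", "latency_ms": 45,  "available": False},
]

def select_endpoint(providers, max_latency_ms=150):
    """Pick the best available provider that meets the QoS requirement;
    the consumer never hard-codes who fulfills the request."""
    candidates = [p for p in providers
                  if p["available"] and p["latency_ms"] <= max_latency_ms]
    if not candidates:
        raise RuntimeError("No provider currently meets the QoS contract")
    return min(candidates, key=lambda p: p["latency_ms"])

print(select_endpoint(PROVIDERS)["name"])  # -> quote-svc-b
```

Note that the unavailable provider "steps out" and the next-best one fulfills the request, which is exactly the Community Cloud behavior described above.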
Friday, June 11, 2010
Regarding Sarbanes, first there has to be an understanding that data ownership and data control are different responsibilities and capabilities in a provider-to-consumer relationship. Without question, a cloud consumer should have full data ownership, but definitely check your provider's contract to be sure of this. You own the data and the intellectual property tied to the data because you created it and did not assign it to the provider. Data control is a little different. You would like to have full accessibility and control as a cloud consumer, but you will certainly need to negotiate with your cloud provider over what control you actually get. You might not get full control from the provider, but you should have something close to it, in case you need to react quickly to an issue or new demand. To pass some of these Sarbanes regulations, ask your cloud provider if they have passed a SAS 70 audit; it is important that they have. This type of audit is performed by an independent audit firm, verifies the provider has proper IT controls in place, and ties back to federal regulation. But also ask your cloud provider for the details of the Type II SAS 70 report, so you can read through the actual descriptive items addressed in the report outcome. This report lists User Defined Controls and tells how well the provider is adhering to them. An example of a User Defined Control: if the cloud provider fires the administrator on your account, that administrator must immediately be removed from having access to your cloud-- accessibility to the information source during employee de-provisioning. With these SAS 70 reports in place, your cloud provider should be Sarbanes compliant!
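The de-provisioning example above amounts to a simple, checkable rule. A minimal Python sketch, assuming a hypothetical one-hour revocation window negotiated with the provider (the window, function name, and timestamps are all illustrative):

```python
from datetime import datetime, timedelta

def deprovision_gap(terminated_at, access_revoked_at,
                    max_allowed=timedelta(hours=1)):
    """Check a User Defined Control: an administrator's access must be
    revoked within the agreed window after they leave the provider."""
    gap = access_revoked_at - terminated_at
    return gap <= max_allowed, gap

# Admin terminated at 9:00, access revoked at 9:20 -- inside the window.
ok, gap = deprovision_gap(
    datetime(2010, 6, 11, 9, 0),
    datetime(2010, 6, 11, 9, 20),
)
print(ok, gap)  # -> True 0:20:00
```

A Type II report effectively audits whether checks like this held over the reporting period, not just whether the control is written down.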
Regarding PCI, which involves governing controls of sensitive information such as credit card numbers and associated information, the industry standards are less mature. There is a PCI compliance certification, and it is a good thing to ensure your cloud provider has obtained it. However, it is not on the same level as a Sarbanes SAS 70 audit, because it is not always performed by a 3rd party and it is not tied to a federal regulation. There have been too many breaches of PCI-compliant systems to treat this as a regulation, even though some states are starting to pass laws around PCI protection. So it's a "nice to have" certification, but not a "requirement" like SAS 70 is. This is because it's not an industry standard and technically doesn't validate the provider as passing an independent audit. Still, I highly recommend having it done if you are going to do PCI in the cloud; just don't rest on your laurels because of it. To go beyond the certification, you need to discuss with your provider how they are doing encryption, security, data privacy, data masking, data protection (virtual and physical), and so forth to ensure the PCI data is protected to the highest level of trust.
The big thing for cloud consumers to remember is to do your homework on your provider. Read their contracts, negotiate their contracts, have your legal team read the contracts, review SLAs, and ensure you are well protected from catastrophes. This is where the rubber meets the road should an issue arise-- make sure the contract is bulletproof. Think of your cloud contract as a prenuptial agreement-- what happens when things go wrong, and how do the parties react? There have to be clear recourse and commitments. Putting this together will help everyone sleep better at night.
Thursday, June 3, 2010
So, who gets EA-- the CIO or a VP of the Business? I argue neither! After all, a typical EA goal is to connect the Business and IT to impart better structure and visibility across the enterprise. I firmly believe that neither should own EA, so that neither imparts too much of their organization (i.e., bias) on the EA process and deliverables. EA needs to be independent, and for all the right reasons.
Companies need to seriously consider organizationally aligning EA into a group that is independent of both IT and the Business. The easiest way to do this is to let the COO own EA and let that group facilitate the collaboration between IT, the Business, and the EA group. The COO has already been assigned corporate responsibility for governance, operations, company performance, and prioritizing organizational requirements. That sounds like a natural fit for Enterprise Architecture to me.
Wait, you don't have a COO? Now's the time to create one! If that's a tough sell to your CEO, then I still recommend keeping EA outside the groups it's supposed to connect, namely IT and the Business. You could do a stopgap and align EA in the IT department, like most organizations do today... but you've now lost your independence and, even worse, your credibility with the business. There's already enough distrust; why create more? With such an approach it's really easy to impart a bias, or worse, a political opinion or even a resentment. Anyone who's worked in the corporate world knows this all too well. EA is chartered to break down these silos, tear down walls, and build bridges across the organization. With the wrong organizational alignment, it could instead be a cause of division. We don't want EA to be the corporate joke punchline, and the only way to prevent this is to keep the EA team at arm's length, by putting it in a separate team with separate alignment. And this applies to SOA as well. After all, the SOA and EA teams should already be the same team, a topic I'll address in another blog entry.
Tuesday, May 11, 2010
• More Offsite (and offshore) consulting
• Shorter and less expensive IT and consulting projects
• More pre-built deliverables such as software applications
• Commoditization of IT in general
Could this be true? Will we all be working remotely to deliver our client projects going forward? Maybe someday, but not anytime soon. Sure, there will be projects that fall perfectly in line for Cloudsourcing, such as those at small/medium-sized businesses who loathe infrastructure, software firms who are well organized to hire and onboard offshore, and high-tech companies who have already accomplished manufacturing outsourcing.
But I counter that there is still, and always will be, a strong need for more soft skills than hard, more white collar than blue, and more human elements that can never be replaced. Now, I'm not blind, and I clearly see there will continue to be a push to offshore more IT labor to save costs; I think this works well when projects are in well-defined "Development" phases where software engineers can work remotely and effectively. However, here are the reasons I believe Cloudsourcing adoption will be slower than some are predicting:
• There is still too much confusion about Cloud Computing among IT departments. Face it, there are very few cloud pioneers, and most organizations are taking a "wait and see" approach. Most of the Fortune 2,000 and Federal government agencies, who spend the most consulting dollars, haven't jumped onto the Cloud bandwagon quite yet. Although many are investigating and interested in Cloud because they know it is the future of computing, they haven't committed yet and probably won't for a couple more years. Some say the pioneers are the ones with arrows in their backs, and this is why a lot of CIOs are letting their peers forge into the Cloud first. Cloud is inevitable; it is certainly the direction our industry is moving. It's just moving a little slower than some predicted.
• It's hard to see eye to eye when you can't see face to face. I believe it was Hilton Hotels' marketing program that launched that slogan, and it's too often true. There is too much human element in IT projects that cannot be handled through teleconferences, online collaboration, email, or other non-human mediums. It reminds me of the "Jay Cutler conference call fiasco" that any Broncos fan remembers. You have to meet people in person, and that is why there are so many consulting road warriors out there.
• Think about it... how many of these initiatives can be successfully completed without face-to-face meetings: Requirements Management, Project Management, Enterprise Architecture, Governance, Technology Insertion, Portfolio Management, Program Management, Communication Management?
• Offshoring Operations, Maintenance, Sustainability, Support, and Administration makes a lot of sense to me. However, offshoring innovation, business requirements, prototypes, and new ideas seems risky. I've always claimed offshoring is a delicate balance of quality vs. cost, and anyone who has been on an offshore team knows how well the product or deliverable needs to be specified before handing it over to the offshore team to develop. Also, I've experienced that offshoring may be cheaper, but it's also slower, so anything requiring rapid market penetration, flexibility to change on the fly, or time sensitivity may not be the best candidate for this model.
Ultimately, we will see minor shifts toward more Cloudsourcing-type models, but certainly no wholesale shifts. Small and medium-sized businesses are prime candidates and are already beginning to embrace this model. It makes sense for them. It just doesn't make sense for the typical Fortune 2000 corporate culture, especially for their strategic initiatives. Don't get me wrong-- I'd like to see the model work and spend less time in airports myself; I just don't think corporate culture is ready for such a monumental shift in consulting models anytime soon.
Monday, May 10, 2010
· Coach along projects you know are coming down the pipeline instead of sending them back to the drawing board when it's time to review them for acceptance (and therefore too late to help them...).
· Help seed projects from the early stages by providing mentoring at the beginning, not the end. In other words, become a venture capitalist of your organization by helping new software ideas align to governance early! Invest in the next enterprise "start-up"!
· Follow Agile Principles—do things earlier, not later in the lifecycle.
· Institute a Software Mentorship program to benefit the organization. Being a champion without sharing your secrets to success is selfish. Help the greater good of the enterprise.
· Put governance in the backseat during the early stages of a pilot project. Help get prototypes off the ground by marginalizing governance (for the moment...). Some of the best innovations had to bend (or break) the rules. Be a game changer if you have to! Governance can always be adapted and applied at a later stage (not too late...), but don't sacrifice innovation for indoctrination.
· Create a culture of excitement, encouragement, and positive attitude. If others think meeting with you is like going to the principal's office, that culture will limit and intimidate the organization. Nothing new will arise; instead, you create a cultural bottleneck to the next "Big Idea". Don't be a bottleneck to new ideas...
Monday, March 29, 2010
It was a nice article, but I politely disagree with Zman that "architecture is not arbitrary".
Lock 10 architects in 10 separate rooms; provide them all an identical copy of the same business, technical, process, and system requirements; have them design an architecture under the same rules and perspectives; and I guarantee your result will be 10 different architectures of varying degrees. Maybe my opinion is biased because I come from a software background, but I often think Enterprise Architecture is an art that is trying to apply a science. No 2 architectures are identical. No 2 interpretations of what an architecture should look like are identical. No 2 architects think alike. Often, architecture is the art of compromise, because rarely will you get 2 architects to agree on the final architecture. Compromise is really hard for us, because we are traditionally very stubborn people! I've met many an architect whose ego is bigger than the Internet and who thinks he could teach a thing or 2 to Socrates. We don't like to be told we are wrong, especially when we develop a "work of art".
Maybe a better statement would be "architecture tries to make the design non-arbitrary". With good architecture principles, patterns, frameworks, rules, constraints, standards, policies, procedures, and approach, the design becomes a simple exercise with little left to judgement or error; it becomes more of a commoditized task. This is what I think architecture truly strives for-- making everything downstream trivial. Architecture lays the foundation for the remaining pieces to snap in easily without much variance. Success is when the blueprint is followed according to plan!
The article goes on to articulate how industry standards have played a role in making architecture non-arbitrary. This makes sense in certain vertical industries cited in the article, such as airplane manufacturing, nuclear power plants, and developing the Space Shuttle. However, it doesn't make sense in many commercial corporations whose architecture is centered on and largely dependent on enterprise software.
Think about it-- building an airplane for Boeing or a nuclear reactor follows very strict standards, specifications, and processes with little to no variance. Building software, however, has tons of variance, and therefore industry standards are rarely adopted religiously in software implementation (unless you are in one of the industries mentioned above). There are some very simple reasons for this dynamic:
- Qualifications for building software are low: a low barrier to entry, a commoditized workforce, and easy-to-learn programming skills.
- QA requirements are much lower. Often it's "just enough QA", and many corners are cut so as not to slow down the market plan.
- Time-to-market patience is low. Software is expected to be deployed before it's truly ready (and its bugs are well known). Rapid is the name of the game, especially in today's economy.
Monday, March 22, 2010
SOA is premised on the philosophy of "designing for change". The goal is that when business requirements change, IT can rapidly make adjustments to support the business demands, because the IT systems are designed, documented, and impact-transparent in a way that allows IT to quickly adapt to new requirements. Businesses are always changing-- new business opportunities, new channels (see "Internet"), new partnerships, mergers and acquisitions, or, in this case, NEW REGULATION. The challenge is that IT can't keep up with its current architecture and IT environments, let alone support all this change! Unfortunately, the whole company suffers from the lack of agility. I would love to see a poll of Fortune 2,000 companies on how quickly their IT departments were able to adapt when Sarbanes-Oxley regulations were enforced upon them. My bet is this would be measured in years, not months, and there are still quite a few companies struggling to adopt Sarbanes today.
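One way to picture "designing for change" is a stable service contract with swappable providers behind it: when new regulation arrives, you add an implementation, and consumers are untouched. A minimal Python sketch with hypothetical service and report names of my own invention:

```python
from abc import ABC, abstractmethod

class ComplianceReportService(ABC):
    """Stable service contract: consumers bind to this, not to a system."""
    @abstractmethod
    def report(self, fiscal_year: int) -> dict: ...

class SarbanesOxleyReport(ComplianceReportService):
    def report(self, fiscal_year):
        return {"regulation": "SOX", "year": fiscal_year}

class HealthCareReformReport(ComplianceReportService):
    """New regulation arrives: a new provider behind the same interface."""
    def report(self, fiscal_year):
        return {"regulation": "health-care-reform", "year": fiscal_year}

def quarterly_filing(service: ComplianceReportService, year: int):
    # Consumer code depends only on the contract, so new regulation
    # means plugging in a new provider -- not rewriting the consumer.
    return service.report(year)

print(quarterly_filing(SarbanesOxleyReport(), 2010))
print(quarterly_filing(HealthCareReformReport(), 2011))
```

The agility payoff is exactly this: the cost of the next regulation is one new service implementation, not a sweep through every tightly interwoven legacy system.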
Now that we know businesses will need to comply with new health care reporting, financing, personnel, operations, and taxes, how many are equipped to comply with the new reform? Will this be Sarbanes Part 2, with IT departments scrambling to change their tightly interwoven legacy systems, CIOs hiring more business analysts for more swivel-chair integration, companies being fined for lack of compliance, or, even worse, making the front page of the newspaper? How about more regulation coming down the pipe? If there is one constant we do know, it is that the business will always change. Regulation is never a one-and-done initiative.
There will always be new regulations enacted on companies, so I advocate that businesses bite the bullet now and invest in agility by considering how SOA approaches can help them solve these problems, limit the impact of required changes, and reduce risk. The payoff from SOA investment will easily be achieved in this case, simply by reducing the cost of regulation compliance (lower resource cost) and "future proofing" the organization for adapting to new regulations on the horizon. Let's not forget, SOA-driven agility also opens doors for new revenue opportunities-- another topic for another day. The early bird gets the worm, so be proactive instead of reactive by architecting your IT environment to meet these important business needs!