
Thursday, November 6, 2014

Paris OpenStack Summit Recap


While I have been involved with OpenStack in various ways for the last couple of years, the OpenStack Summit in Paris was my first, and it was a great experience. Throughout my career I have attended many vendor conferences like Cisco Live, VMworld and others, industry events (CeBIT, Supercom, …) and I have also attended FOSDEM in the past. The OpenStack Summit was a mix between an industry event, a vendor conference and FOSDEM.

The organisation was great, similar to what you get at large vendor events. The audience was certainly (a lot) more technical, with a much stronger presence of developers than at most vendor conferences (I have not attended developer conferences …), but there is clearly also a marketing spirit all around the Summit that is very different from the FOSDEM experience. That said, the event shows the interest in open collaboration.

Considering this event happens every six months, and considering it is an open-source-focused conference, the volume and level of attendance is impressive. I believe there were more than 4,600 attendees from 60 different countries.



The keynotes were interesting and there were many good sessions. I believe most were recorded and the videos can be found online here.

Learning Experience

There are a lot of sessions, really a lot. This means you have to make hard choices about which ones to attend, because no sessions are repeated and there are many overlapping topics. I believe the session abstracts could sometimes be a bit better as well ... As explained above, most were recorded, so you can also catch up on those you missed, but of course without the opportunity to ask questions, interact, etc.

So what about OpenStack? … I have been a fan and an OpenStack "aficionado" for a while. Already a year ago it was clear that the maturity level was reaching a point where OpenStack becoming mainstream technology was no longer a dream.

At this Summit, we heard about large-scale deployments by some (very) large enterprises including BMW, BBVA, Tapjoy, CERN, SAP, Expedia and others. Impressive. We are talking about different use cases, different industries, different scale levels, but all successfully running OpenStack.

CERN is currently running 70,000 cores on OpenStack (I believe on RDO if I got it right), with plans to double that in the near future. The keynote presentation by Tim Bell was great. I strongly recommend watching it, not just from an OpenStack point of view but to learn more about CERN and what they do.

By all means, OpenStack is a real challenger to proprietary legacy virtualisation and cloud management vendor solutions. In fact, perhaps not a challenger but many challengers … because there are already various mature options for "consuming" OpenStack, whether as a product (Piston), a distro (RDO, HP ...), or even as a service (MetaCloud).

Challenges?

Sure. Many. This is one of the areas where the OpenStack Summit clearly differs from vendor events and trade shows. Folks here share their problems as much as they share their successes.

I think it is fair to say that unless you know very well what you are doing and/or have a small army of OpenStack-savvy engineers, you want to have a vendor backing you up (see the list above). But how is this different from implementing a cloud solution using proprietary software? ... It's not like you can download the vendor's code and start building your cloud, right?

OpenStack's challenges seem to come mostly from the very nature of the framework: a loose collection of components that work together but are built to be independent. So you need to pay attention to tuning things like your backend databases and message queue systems, understand how you provide HA for the various components, and so on.
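To make that a bit more tangible, here is a minimal sketch of the kind of plumbing check you end up writing once you operate these pieces yourself. The hostnames, ports and three-node layout are purely hypothetical placeholders, not taken from any particular deployment; it simply verifies that the (assumed) Galera database nodes and RabbitMQ cluster members behind the control plane are reachable over TCP.

```python
import socket

# Hypothetical control-plane endpoints; adjust to your own deployment.
BACKEND_SERVICES = {
    "galera-node-1": ("db1.example.local", 3306),
    "galera-node-2": ("db2.example.local", 3306),
    "galera-node-3": ("db3.example.local", 3306),
    "rabbitmq-node-1": ("mq1.example.local", 5672),
    "rabbitmq-node-2": ("mq2.example.local", 5672),
    "rabbitmq-node-3": ("mq3.example.local", 5672),
}

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in BACKEND_SERVICES.items():
        status = "up" if is_reachable(host, port) else "DOWN"
        print(f"{name:<18} {host}:{port} -> {status}")
```

A real deployment would obviously go further (checking Galera cluster size, queue mirroring status, and so on), but the point stands: these moving parts are yours to watch.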

Perhaps because of my background, or perhaps because of reality, networking appears to be the most challenging part. It is the one area where the "native" OpenStack solutions are not really up to speed and you need to rely on some vendor option.

There's still a lot of discussion about whether overlays and/or full SDN solutions (where the physical fabric is orchestrated to provide virtualization services) are the best choice. There was a great panel about it.

One of the interesting points that came out of that panel is that L2 overlays and virtualization solutions are in fact trying to solve problems that stem from IPv4 limitations, whereas we should be thinking about IPv6 as the long-term solution. I think this is definitely true and worth exploring. Much of what has been done with overlays is completely unnecessary if we consider IPv6.
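As a small illustration of how far native IPv6 addressing goes, here is a sketch using Python's standard ipaddress module. The prefix is from the documentation range and the per-tenant /64 scheme is just an assumption for the example: a single /48 already yields 65,536 tenant-sized /64 subnets with no encapsulation involved.

```python
import itertools
import ipaddress

# Illustrative prefix (documentation range); a real deployment would use its own allocation.
site = ipaddress.ip_network("2001:db8::/48")

# Carving the /48 into per-tenant /64s yields 2**(64-48) = 65,536 subnets,
# each with an effectively unlimited number of host addresses.
tenant_subnets = site.subnets(new_prefix=64)

for tenant_id, subnet in enumerate(itertools.islice(tenant_subnets, 4)):
    print(f"tenant {tenant_id}: {subnet}")

# Output:
# tenant 0: 2001:db8::/64
# tenant 1: 2001:db8:0:1::/64
# tenant 2: 2001:db8:0:2::/64
# tenant 3: 2001:db8:0:3::/64
```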

Another thing that became clear is that much work is still required on OVS, both in terms of performance and stability.

Conclusions?

After interacting with multiple people, including developers, customers, vendors … there is a clear interest in moving away from expensive proprietary software stacks for implementing private and public clouds.

Many speakers (including some in several of the keynotes) talked about OpenStack providing a way to achieve freedom of choice, not just of hardware but, most importantly, of software components as well. During the keynote on Monday, Dr. Stefan Lenz from BMW in particular was very explicit in calling out the issues of the past, when they developed around proprietary automation software - effectively locking themselves into it - only to later face increasing license fees they couldn't walk away from (because they were locked in).

Contrary to the mantra going around … software lock-in is usually the most expensive one!

Final thought ...

The messaging in the keynotes, understandably, highlighted the growing relevance of software in the economy and in the world in general. Of course, the open source development model is presented in this context as the natural way of doing things in this new world. Pretty much, the message is that open source will rule the world. One slide suggested that in the near future most solutions will likely follow an 80/20 rule, with 80% of the code being open source.

So, considering that companies still need to create value, how do you differentiate in an open-source-dominated world? Is that 20% of value-added code enough? Either that, ... or the hardware where the open source code runs adds value as well … (or both).



Sunday, September 7, 2014

About a VMware OpenStack vs. Red Hat OpenStack comparison


I watched one of the sessions from the last VMworld about the VMware Integrated OpenStack solution. I found it very interesting, and there were a couple of slides that really caught my attention. The messaging goes that OpenStack sits atop a set of compute, storage and network resources, but does not impose what those resources are made of. This is very true, and one of the nice things about OpenStack.

In the case of VMware, they presented how they can provide all of those resources in an OpenStack solution: compute with vSphere, storage with VSAN and network with NSX. Further to that, they said vCAC and vCOPS are complementary and you could use them as well (making you wonder why on earth you would be running OpenStack if you have all of those products … but that is another story). Following that, there were a couple of slides presenting a comparison between an OpenStack solution built using VMware products and another built using Red Hat. The slide quoted a document by Principled Technologies as supporting material. That document can be found here:

http://blogs.vmware.com/virtualreality/2014/06/vmware-bests-red-hat-openstack-performance-cost-study-2.html
http://www.vmware.com/files/pdf/techpaper/OpenStack_VSAN_0614.pdf

I find it very interesting that VMware singled out an OpenStack vendor and chose Red Hat. It is worth reading the document and reflecting on it.

In principle, the goal of the testing was to compare VMware vs. RedHat for running an OpenStack cloud, giving consideration to the hypervisor and storage parts of the solution (NSX and Neutron were left out).

The title of the document is "Cost and Performance Comparison for OpenStack Compute and Storage Infrastructure". The testing is done by using common tools to measure storage performance and by running a Cassandra DB on VMs provisioned via OpenStack and measuring its performance as well using standard performance testing tools.

The conclusions are very neatly articulated in the document's introduction and can be summarised as: the VMware solution performs better (159% more IOPS) and is less expensive (26% lower cost over three years).

The first point isn't shocking (although I was surprised by the incredible performance advantage, given that I have seen studies showing KVM outperforming ESXi for other DB workloads). But the second point was certainly a surprise.

But as with all studies, what matters is how they reach their conclusions and which items lead to the differences. Let's look at them.

The performance difference can be explained very easily by noticing a few things:

  • the performance was measured only for the storage part of the solution: not for memory-bound workloads, not for CPU-bound workloads, not for network-I/O-bound workloads.
  • the tests run to measure storage performance were biased towards reads (70/30 read to write in all cases). This may or may not be realistic, depending on the workload, but it is probably reasonable.
  • the VMware solution (using VSAN) leverages SSDs for caching; the RedHat solution (using RedHat Storage Server) does not.


There you go: the difference in performance is primarily explained by the use of SSDs for caching inside VSAN. If you were to use an SSD-based storage solution on the Red Hat side, the performance difference would be completely different, probably negligible and not necessarily to VMware's advantage.

In defence of those conducting the testing, RedHat currently does not offer a scale-out storage solution that can use SSDs for caching only. You can use GlusterFS with SSDs, but it will be very expensive.

However, if VSAN were removed from the equation and both solutions were compared using common storage from, say, NetApp (using SSD for caching), you would probably get performance equivalent to the VSAN scenario. Arguably that would be a more open solution, because unlike VSAN, NetApp storage wouldn't be limited to working with vSphere only.

The price difference comes from:

  • using dedicated servers for running RedHat Enterprise Storage: this effectively more than doubles the cost of hardware.
  • the cost of Red Hat Storage Server (I am unfamiliar with how this is licensed, so I can't comment and of course I take it as accurate).
  • the cost of using a full-blown RHEL for running KVM.

Before going forward, I would like to quote something from the test document, and ask the reader to bear in mind the title of the document itself:

"While Red Hat does provide Red Hat Enterprise Linux OpenStack Platform, we left OpenStack support out of these calculations as stated above because each OpenStack environment and support engagement is so variable"

This test was commissioned to compare OpenStack solutions, but OpenStack solution pricing was not considered … Confusing, isn't it?

In the test, they chose to use RHEL Server to run KVM. Given they are using RHEL strictly for running KVM, they should have chosen RHEV instead, which is also supported on the Dell PowerEdge servers they had at hand. This matters because it is more lightweight, optimised for running KVM with greater VM density and … less expensive:

- Red Hat Enterprise Virtualization, Premium (24x7x365 support):
4 (socket pairs) x $1,499 = $5,996.
source: https://www.redhat.com/en/files/resources/en-rhev-vs-vmware-vsphere-competitive-pricing-11717847.pdf

This already reduces the cost difference a bit … but actually, to make this an apples-to-apples comparison in the context of running OpenStack, they could have included Ent+ for vSphere, because on RHEV you can actually use Neutron to implement distributed switches and/or leverage plugins from SDN vendors, and you can't do that on vSphere ENT (which only has the standard vSwitch, not very cloud-friendly I believe). The pricing for the VMware software should therefore increase accordingly.
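For what it's worth, the tenant-facing workflow with Neutron is the same regardless of which mechanism sits underneath (OVS, a vendor SDN plugin, or plain VLANs on the fabric). A minimal sketch using the openstacksdk library, where the cloud name, network name and CIDR are placeholders of my own, would look like this:

```python
import openstack

# "mycloud" is a placeholder for an entry in clouds.yaml; names and CIDR are illustrative.
conn = openstack.connect(cloud="mycloud")

# Create a tenant network and subnet through Neutron. Which backend realises it
# (OVS, a vendor SDN plugin, plain VLANs on the physical fabric) is hidden behind the API.
network = conn.network.create_network(name="app-tier")
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="app-tier-subnet",
    ip_version=4,
    cidr="192.0.2.0/24",
)

print(f"created network {network.id} with subnet {subnet.cidr}")
```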

Just adding those two licensing considerations, which lower the price of the RedHat solution and increase the VMware one by quite a bit, the two solutions would be almost equivalent in price in practice. Then again, if you consider using a storage solution from a vendor like NetApp or EMC instead of the vendor's scale-out option, you can build a RedHat-based solution with equivalent performance and lower cost.

It then also becomes a question of choosing between a converged solution and separate external shared storage. A VSAN approach would have a density advantage (it uses less rack space), and perhaps would be easier to manage too. There is also an important element to consider in operational cost, retraining of staff, etc. An external-storage-based solution offers greater flexibility, because it can be shared for things other than vSphere. Also, given that storage needs grow faster than compute needs in most environments, external storage may be cheaper to run in the long term, but this depends on each environment.

Net net, just as with all vendor-commissioned TCO studies, I recommend that people actually read the studies (as opposed to just retaining the conclusions) and reflect on them, then customise the study methodology for their own environments. Such studies are usually a valuable source of information and a framework for a valid comparison, but you can never assume that they compare apples to apples.
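As a starting point for that kind of customisation, a deliberately simplistic sketch is shown below. Every figure in it is a placeholder of mine, to be replaced with your own quotes, discounts and node counts; none of the numbers come from the study itself.

```python
# All figures are placeholders: substitute your own quotes, discounts and node counts.
YEARS = 3

def three_year_cost(server_unit_cost, server_count,
                    virt_license_per_node, storage_license_per_node,
                    support_per_node_per_year):
    """Very rough capex + licensing + support total over the comparison period."""
    capex = server_unit_cost * server_count
    licenses = (virt_license_per_node + storage_license_per_node) * server_count
    support = support_per_node_per_year * server_count * YEARS
    return capex + licenses + support

# Hypothetical example: a converged 8-node design vs. 8 compute nodes plus
# 4 dedicated storage nodes, with made-up per-node figures.
converged = three_year_cost(9000, 8, 4000, 2500, 1200)
dedicated_storage = three_year_cost(9000, 12, 4000, 2000, 1200)

print(f"converged design:        ${converged:,.0f}")
print(f"dedicated storage nodes: ${dedicated_storage:,.0f}")
```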


Sunday, April 13, 2014

Software Overlays: infinite scaling and pay as you grow cost model


Defenders of software overlays are often upset when they are challenged on scalability compared to hardware-based solutions. An example here: Scalability with NSX.

Honestly, I too think that the argument of software (running on x86) vs. ASICs isn't the most interesting one. However, I also think that dismissing any argument by saying "you don't understand the architecture of software overlays" is empty reasoning. And yet that is what usually happens, and folks (like in the example above) resort to saying <my-overlay-of-preference> is a distributed architecture and therefore it scales.

The flaw in that reasoning is that you are considering ONE dimension of the problem: moving packets from A to B. But there are many other dimensions when it comes to any network system. There's building and maintaining network state, there's availability, there's managing latency and jitter, and many others.

I do not want to trash software overlay solutions that run on x86, by the way. I think the services they provide are better delivered by the physical network, which must exist anyway, but when that is not possible and you are in an environment with 100% virtualisation … software overlays are definitely worth considering. Even then, handling north-south traffic in and out of the overlay is a challenge that must not be overlooked. In the long run, an integrated solution will prevail, in my opinion.

The key is "100%" virtualisation. When that is not possible, when there is going to be east-west traffic towards databases or other systems running on bare metal (and many systems run, and will keep running, on bare metal and/or outside a hypervisor), overlays not only fall short but also become increasingly expensive. Of course, when your business relies 80% on selling hypervisor licenses, your view of the world is somewhat different …

What software overlays don't really eliminate is the upfront capital cost of building a network infrastructure. This is a fact.

They also do not fully provide a pay-as-you-grow model. If you want to build an infrastructure with 100 physical hosts, you need at least 200 physical network ports (assuming redundant connections, not counting management, etc …). When you want to add another 100 physical hosts, you need another 200 physical network ports and to grow your network core/spine/whatever to accommodate them. This is true whether you run a software overlay using VXLAN, plain VLANs or anything else (by the way, VLANs are still more than sufficient for many cases, and are easily automated through any modern CMP, including OpenStack Neutron).
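To put rough numbers on that, here is a trivial sketch of how the physical port count grows with the host count. The assumptions (two redundant host-facing ports per host, 48-port leaf switches, four uplinks per leaf) are illustrative, not taken from any particular design.

```python
import math

# Illustrative assumptions: 2 redundant ports per host, 48-port leaf switches,
# 4 spine-facing uplinks per leaf. Adjust to your own design.
PORTS_PER_HOST = 2
LEAF_PORT_COUNT = 48
UPLINKS_PER_LEAF = 4
HOST_FACING_PORTS_PER_LEAF = LEAF_PORT_COUNT - UPLINKS_PER_LEAF

def physical_ports_needed(hosts: int) -> tuple[int, int]:
    """Return (host-facing ports, leaf switches) required for a given host count."""
    host_ports = hosts * PORTS_PER_HOST
    leaves = math.ceil(host_ports / HOST_FACING_PORTS_PER_LEAF)
    return host_ports, leaves

for hosts in (100, 200, 300):
    ports, leaves = physical_ports_needed(hosts)
    print(f"{hosts} hosts -> {ports} host-facing ports on {leaves} leaf switches")
```

The overlay changes nothing in this calculation; it only changes what happens on top of those ports.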

Adding or removing VMs on those 100 physical hosts is another story. If you choose to go for a software overlay model to provide connectivity on top of the physical network, and you choose to pay per VM rather than otherwise, … well, that is a choice. Customers should do a TCO analysis and choose whatever they find most convenient, considering support for multiple hypervisors, etc.

What you cannot do is think that any vendor is providing you with an infinite-scale (or near-infinite-scale) system.

What you should not do, either, is evaluate scalability along a single (simplified) dimension. No overlay system is fully distributed. Packet forwarding (a.k.a. the data plane) may be distributed to the hypervisors, but control is centralised. Sure, vendors will say "control clusters are built on a scale-out model" … but if that is the holy grail, ask yourself why you can't scale out as far as you want and are instead limited to 5 servers in the cluster … maybe 7 … maybe … There must be some level of complexity when you can't just "throw more servers at it and scale …".

Control and network state become more complex as you move up the stack. It is one thing for L2, another for L3, yet another when you add policy. There is no holy grail there … and if you believe you have found it, you are wrong; you just haven't realised it yet.

There is only one real problem: scalability.

I urge any reader to think about that sentence, not just in the context of technology but in any other. Fighting world poverty, for instance.

Sunday, April 6, 2014

IP Networks & Operating Systems - Virtualization is a means to an end, not the goal

I have recently seen a slide where it was stated that decoupling networking hardware and software is a must for achieving network virtualisation. This would also enable hardware independence, and provide the missing tool for the holy grail: the software defined data center.

I suppose that the best thought leadership is all about making people think that a (potential) means to an end is really the objective itself. For instance, someone says "I need a coke" when in fact they are just thirsty and a coke is merely one way to alleviate their thirst. Similarly, while the goal is IT automation, a possible means to that end is a software defined data center, and implementing a software defined data center by virtualising everything on x86 is only one option for doing that. In this sense, many are evangelising that you need virtualisation, and because you need virtualisation you need network virtualisation, and therefore you need to be able to run virtual networks the way you run virtual servers.

I don't argue against the benefits of server virtualisation, or the need for network virtualisation in certain environments for that matter. I just find it interesting how many of the marketing messages create the perception that virtualisation is a goal in itself, much in the same way that SDN messaging has been distorted in the last few years, so that it is no longer about separating the control and data planes and opening both of them, but rather about running both in software (even if tightly integrated) … But that is a topic for another post.

Why Server Virtualization was a Necessity 

I believe server virtualisation solved a problem that had been created by poorly designed operating systems and applications, which could not fully leverage the compute capacity available to them. The x86 architecture was also not good at providing isolation to the higher layers. In the end, you had a physical server running one OS with an app stack on top which was not capable of making use of the full compute capacity. Servers were underutilised for that reason.

Hypervisors therefore solved a deficiency of operating systems and applications. Applications were, for the most part, incapable of using multi-core capabilities, and operating systems were unable to provide proper isolation between running applications. This is what the hypervisor solved. And it was probably a good solution (maybe the best solution at the time), because clearly rewriting applications is much harder than instantiating many copies of the same app and load balancing across them. However, had the OS provided proper containment and isolation, and had the CPU provided performant support for it, a hypervisor would have been far less necessary. In that case, even if you did not rewrite applications for better performance, you could still run multiple instances. In other words, if we had had Zones on Linux 8 years ago, the IT world would perhaps be somewhat different today (although in fact we had them … perhaps in the wrong hands though).
Anyway, it is clear that today, for instance, running apps on LXC is more efficient from a performance standpoint than running them on a hypervisor. It will be interesting to see how that evolves going forward.

We may need network virtualisation, but we do not need a network hypervisor

Similarly, an IP network does not natively accommodate proper isolation, or multiple users with different connectivity and security requirements sharing the network. IP networks are not natively multi-tenant and have no built-in ability to segregate traffic for various tenants or applications; they were really conceived to be the opposite. There are solutions such as MPLS VPNs or plain VRFs: in a nutshell, you virtualise the network to provide those functions. You do that at the device level, and you can scale it at the network level (MPLS VPN again being an example of that, although it uses IP only for the control plane and MPLS for the data plane). VPLS is another example, albeit for delivering Ethernet-like services.

Arguably, MPLS VPNs and/or VPLS are not the right solution for providing network isolation and multi-tenancy in a high-density data center environment, so there are alternatives that achieve this using various overlay technologies. Some are looking to do this with a so-called network hypervisor, essentially running every network function on x86 as an overlay. For those supporting this approach, anything that is "hardware"-bound is wrong. Some people would say that VPLS, MPLS VPN, VRF, etc. are hardware solutions and what we need are software solutions.

I believe this is not true. A VRF on a network router or switch involves software, which programs the underlying hardware to implement separate forwarding tables for a particular routing domain and set of interfaces. A virtual router running as a VM and connecting logical switches is pretty much the same thing, except that its forwarding table is implemented by an x86 processor. I do not like this partial and simplistic vision of hardware vs. software solutions. There are only hardware-plus-software solutions. The difference is whether you use hardware specialised for networking or general-purpose compute hardware. The first is of course significantly faster (by orders of magnitude), whilst the second provides greater flexibility. The other aspect is provisioning and configuration. Some would argue that if you run network virtualisation in software (again, meaning on x86 on top of a hypervisor) it is easier to configure and provision. But this is purely a matter of implementation.

Conceptually, there is no reason why provisioning network virtualisation on specialised hardware would be any harder than doing it on general compute hardware.

You will always need a physical network … make the best out of it 

Because you always need a physical network in a data center, it is also evident that if that network infrastructure provides the right isolation and multi-tenancy with simplified provisioning and operation, it is a more efficient way of achieving the goal of automating IT than duplicating it with an overlay on top of the physical infrastructure (much as LXC is more efficient than a hypervisor). This leads to the title of the post.

The goal is not to do virtualisation. Virtualisation is not a goal. The goal is not to do things in software vs. hardware either. 


The goal is to enable dynamic connectivity and policy for the applications that run the business an IT organisation supports, and to do so fast and in an automated way, in order to reduce the risk of human error. Whether you do it on specialised, sophisticated hardware or on general-purpose x86 processors is a matter of implementation, with merits and demerits on both sides. Efficiency is usually achieved when software sits as close to specialised hardware as possible.


Friday, January 31, 2014

On networking and political systems

I don't really know why I decided to write tonight. There is plenty of better stuff to do than this. Maybe it is because I am in a bad mood because I got my bicycle stolen today (and it's the second in 12 months …). Or maybe because I haven't posted anything in a long time. Whatever the reason, and for whoever cares to spend time reading, here it goes …

A couple of weeks ago I read the blog post "Democratising Capacity (or how to interpret Cisco math)" by JR Rivers, CEO of Cumulus Networks (and also an ex-Cisco employee). I recommend reading it.


It provides a critique of a claim Cisco made about being able to provide more affordable network solutions than bare-metal offerings (here). His analysis concludes that this isn't true, and criticises Cisco's lack of pricing transparency and the closed nature of the system (since you have to buy the optics and even the cables from Cisco - something which isn't accurate, by the way).

I know that the current buzz in the industry is all about "open" and "disaggregated", and that is all well and good. And that everything that is evil in the networking industry is blamed on Cisco (as if they had been the only vendor in the industry for the last 20 years or more). I also know that white-box switches with Linux-based operating systems had been selling for years before (without being cool, and without being very successful until now). The code quality and feature set of those white-box switches isn't particularly great.


Cumulus Linux changes that. I think ONIE is interesting, and bare-metal switches supporting it, together with Cumulus Linux, will one day make a more competitive offering than white-box switches have been in the past, if Cumulus Networks proves capable of delivering a good, rich operating system and quality support.

And this is a good thing. Competition is good. Period.

But there are a few things in that post that I think are questionable, and I will humbly question them.

Let's begin:  "The attributes of transparency, choice, and degrees of freedom, not price, are driving all of the mega-scale customers to bare-metal networking solutions"

Of course I can't speak to what the mega-scale DC operators are doing and why, but from what I have been reading on this topic and from the coverage of the OCP Summit this week, everything points to price as the main motivator for those making the move (an example from Facebook just this week).


I also don't understand the criticism about lack of transparency and choice. Certainly on freedom of choice, IT organisations can choose from many hardware vendors. Now, last year, and for the last 20 years (and more). Granted, they can't decide to source the hardware from HP and run NX-OS on it, or to buy from Cisco and run JunOS, or whatever. Nor do I think many customers would want that. In any case, choices … there have always been plenty.

On transparency ... I think the criticism here is mainly about pricing transparency. The Cisco list price is readily available (and JR Rivers has access to it, since he quotes it). Cisco's standard discounts and partner programs are available as well. Are there special discounts for large-volume deals? Sure. Like in every industry. Every vendor does it.

In fact, Cumulus Networks does it too. JR is guilty of what he criticises (if I understood the criticism right): the list price for a certain Cumulus Linux SKU is $699/switch/year, but thanks to an annual volume cap it can be as low as $150/switch/year. Is this valid for every customer? What level of volume is required to get to this figure? Is it 10,000 licenses per year? What do I pay if I buy, say, 7,000 licenses per year? … It depends, I am sure. It is a negotiation.
How is this different from Cisco, or any other vendor, giving a special discount to a customer on a volume order?

Further, the list price he quotes for the AS5610-52X from Edge-core Networks is actually lower than the retail price you can find online. The actual model on the Edge-core website is the AS5600-52X (which is offered as a white-box or in a bare-metal option supporting Cumulus). I googled "AS5600-52X price" and this is what I found:

  • $5,999 (http://unixsurplus.com/product/accton-data-center-switch-as5600-52x-48-port-x10ge-4x-40ge-tor-spine-switch-l3 )
  • $5,999 (http://www.ebay.com/itm/Accton-Data-Center-Switch-AS5600-52X-48-port-x10GE-4x-40GE-TOR-Spine-Switch-L3-/141140747532)
There are more links, but they either show higher pricing (with slightly different SKUs) or no pricing. I did not do an exhaustive search. Of course, JR cannot answer for the price transparency of another vendor (and he does say he encourages them to be transparent).

Besides, this is anecdotal. I am sure that JR is well aware of the actual list price, and I am sure he is bloody right. What I am simply trying to illustrate is that pricing transparency … is still missing from the bare-metal model, but not from the established vendors.



Freedom of choice. Well. Freedom to choose … Cumulus Linux. Because at the moment, if you are a customer who wants to buy the AS5610-52X from Edge-core in the bare-metal option, you can only run Cumulus Linux on it. Maybe in the future you will also be able to run another OS. Not today.

There is also another angle, and this is the most important one: we are not comparing apples to apples. And this is key. I could mention minor details like the fact that the Nexus 3000 from Cisco has double the DRAM and a more powerful CPU than the AS5600-52X, or that it is also more energy efficient (even with double the DRAM capacity and a more powerful CPU, it consumes less power). But it is at the OS level that the bigger differences lie.

JR says a "normal" customer needs to buy the Enterprise LAN software license (N3K-LAN1K9). I don't know what "normal" means in this context. Most customers use this product for L2 server access and therefore don't require that license. Some customers implement L3 networks end to end, again not requiring that license since OSPFv2 is included in the base license (for up to 256 routes). You need the license if you use a larger OSPF routing table (hardly required in an access situation), or if you need BGP or VRFs.

However, there are many things that you get with the Cisco base license that you do NOT yet get with Cumulus Linux. To begin with, vPC, to enable servers to use LACP channels to redundant switches, for instance. Or L3 multicast routing (the Cumulus Linux data sheet makes no mention of PIM support, and since it is not part of Quagga I guess it is simply not there at all). I could make this list very long, but that will suffice. JR's comparison should start by removing those $8,000 from the Cisco count ...

It is also not apples to apples to compare support from Cumulus Networks with support from Cisco. I think this is so obvious today that it requires no further explanation. I can tell Nike that I will do publicity for way less than Usain Bolt does, but I don't believe they will think that I can run nearly as fast as he does.

And … there is one final detail that is not clear to me yet. JR mentions the AS5610-52X list price is $4,200 and the Cumulus Linux License & Support is $150 (yearly). But … what about the HARDWARE support? I mean hardware replacement. Cisco SmartNet also includes advanced hardware replacement, with next-business-day delivery (which Cisco honours on a global basis).

Does this mean that the $150/switch/year in that calculation includes advanced hardware replacement for RMAing a defective unit of the Edge-core switch on a global basis? … An important detail.

Finally ... price isn't really what matters most, for most people. It is about value. In my experience, (most) IT organisations care that their network works well. What they need, therefore, is objective information and facts to make an educated decision about what is best for them in that sense.

Religion and political ideals about democratising I-don't-know-what belong somewhere else, not in technical decision-making.