Influencer Marketing for Dummies

What the hell is this new buzzword Influencer Marketing, and who exactly is the Influencer here? Haven’t we had enough self-labelled titles? Heck, I even call myself a “datacenter specialist” on my LinkedIn profile. What’s a specialist anyway? While I do agree that self-labelling is a weird thing, it’s worth looking at the bigger picture.


What’s influence

Allow me to shed some light on the situation. It all depends on what you define as influence. Some might argue that the so-called influencers of today don’t really have any influence at all. I would disagree, and I speak from my own experience here, having previously worked in pre-sales for a VAR.

When I was looking for new technology, or double-checking a new product before selling it to a customer, I’d search for all available information online. And I reckon we can all agree that most customers tend to do the same thing these days. Whether the writing came from a professional journalist or a small blogger sharing his experiences, every piece of information had at least some impact on my buying/selling decisions. So whether or not someone has a big following or attracts thousands of people to a keynote doesn’t really matter that much when I’m searching for information in my buying process.

The conversation

Not that long ago I was talking to a startup, and to truly understand what they do I asked a bunch of questions. Every answer, of course, led to a new question. There were quite a few questions I had to ask twice, though, because:

These are questions we don’t get from our customers.

And that struck me. They were focussing solely on the conversation with the customer. They had no idea that I – and I’ll call myself the influencer here – needed far more information than their customer in order to truly understand their value against the competitors I already knew inside out. Without that information, I can’t act as the influencer, write about them, or talk to their customers on their behalf.

Let me visualise all these conversations and you’ll probably see why this all makes sense.

What this company didn’t truly understand was that without ‘other’ people leading the conversation, the customer base would only grow by scaling the company’s own resources. Some companies call this organic growth; I call it a missed opportunity.

An influencer is anyone who talks about your company or products and is not on your payroll. Making sure they have ALL the information, on time and in their preferred format, is the key task of an Influencer Marketer. After all, that’s what they need to be able to influence someone else.

Am I calling the Marketing / Community / PR / AR / Sales & Channel teams dead? ABSOLUTELY NOT! All of these contacts have their own way of being approached and managed, and there are specific skills needed to do that the right way. Does this make the role of an Influencer Marketer redundant? Only if you like working in protected silos.


New Awesomeness Feature on my list – from Meraki

I have sat through quite a number of vendor pitches. From time to time it’s the small things that differentiate a company and show you its true culture. I keep a list of these small features that have delighted me over the years, and I’m ready to share some of them with you.

Make-A-Wish button

The latest feature added to my list is the Meraki Make-A-Wish button. I first saw it in their presentation for TechFieldDay at Cisco Live in Milan. It’s basically a button in the bottom right corner of EVERY SINGLE PAGE of the User Interface (UI) where you can request a feature or UI change. It is:

  • NOT a form
  • NOT a phone call
  • NOT an E-mail
  • Just a button that opens a text box!

Continue reading New Awesomeness Feature on my list – from Meraki


All-Flash is changing your hardware support

A couple of weeks back, on Episode 29 of the In Tech We Trust podcast, we talked about the failure rate of flash drives. Apparently, when handled by smart controllers, it is far lower than we would think. Some DMs with Vaughn Stewart (Chief Evangelist at PureStorage) later, we came to the following statement:

The failure rate of flash drives at PureStorage, 2.5 years after GA, is less than 10 failures out of the thousands of deployed drives.

This is a truly impressive number. It also helps explain why SolidFire announced “an unlimited drive wear guarantee valid for all systems under current support”. (source: The Register)
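
To put that claim in perspective, here is a quick back-of-envelope calculation. The fleet size is an assumption on my part (the statement only says "thousands" of drives), so treat the outcome as an order-of-magnitude illustration rather than an official figure.

```python
# Back-of-envelope annualised failure rate for a flash fleet.
# Assumptions (NOT official PureStorage numbers): 5,000 deployed drives,
# 10 failures over the 2.5 years since GA.
deployed_drives = 5_000   # assumed fleet size ("thousands" of drives)
failures = 10             # "less than 10", taken as the worst case
years = 2.5               # time since general availability

cumulative_rate = failures / deployed_drives      # over the whole period
annualised_rate = cumulative_rate / years         # rough per-year rate

print(f"Cumulative failure rate: {cumulative_rate:.2%}")   # 0.20%
print(f"Annualised failure rate: {annualised_rate:.3%}")   # 0.080%
```

Even with these conservative assumptions you end up well below the 1-2% annualised failure rates commonly quoted for spinning disks.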

The Rebuild

Having a failed disk is not necessarily an issue; we have failover mechanisms for that. The problem is the consequence of the rebuild time. First of all, there is the risk of a double failure, since we put extra stress on the remaining disks while rebuilding parity. That’s why we created double-parity solutions (RAID6). Secondly, there is a significant performance drop, since both the controllers and the disks are ‘busy’ working on that rebuild. This used to mean 24 to 48 hours of keeping your fingers crossed.
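
To get a feel for where those 24 to 48 hours come from, here is a minimal sketch. The capacities and sustained rebuild throughputs below are my own illustrative assumptions; real rebuild times depend heavily on the RAID implementation, the number of drives and the production load during the rebuild.

```python
# Naive rebuild-time estimate: capacity divided by sustained rebuild throughput.
# All numbers are illustrative assumptions, not vendor specifications.
def rebuild_hours(capacity_gb: float, rebuild_mb_per_s: float) -> float:
    """Return a naive single-drive rebuild duration in hours."""
    seconds = (capacity_gb * 1024) / rebuild_mb_per_s
    return seconds / 3600

# A 4TB 7.2k RPM disk rebuilt at ~50 MB/s (throttled so production IO survives)
print(f"HDD: {rebuild_hours(4000, 50):.1f} h")   # ~22.8 h

# A 1TB flash drive rebuilt at ~400 MB/s
print(f"SSD: {rebuild_hours(1000, 400):.1f} h")  # ~0.7 h
```

Smaller drives and much higher sustained throughput shrink the ‘fingers crossed’ window dramatically, which brings us to the next question.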

Does this still apply to all-flash systems? I mean, isn’t a flash drive orders of magnitude faster than a hard disk? Let’s put it to the test, shall we?

Continue reading All-Flash is changing your hardware support


Cisco NBase-T – lipstick on your Cat5e pig

In January of this year I was invited to join the TechFieldDay crew at Cisco Live in Milan. On Monday we got a whole day of Cisco presentations, and for the rest of the week we had time on the show floor (and a trip to Lake Como!).

The BYOD pipe is just too small

The last presentation of the day was by Peter Jones, principal engineer at Cisco and chairman of the NBase-T Alliance. He came to present MultiGigabit Ethernet (2.5Gbps / 5.0Gbps) to us. My immediate reaction was: who the hell needs a 2.5 and 5.0GbE standard when we already have 10GbE being rolled out and 40GbE or even 100GbE up and coming?

While 10/40/100GbE is indeed being deployed, this is mainly inside the datacenter itself: interconnects between server clusters, or the frontend/backend of storage arrays and clusters. But what if we want more than 1GbE to the endpoints?

How many of you come into the office and still plug your computer into an RJ45 cable? Most of you will just open your laptop (and tablet and smartphone) and connect straight to the wireless network, be it the secure corporate Wi-Fi for your laptop or the guest network for your BYO device. With all these devices connected, our APs can’t keep up on ‘merely’ 1GbE uplinks. So we have to scale the bandwidth.
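
A quick sketch of the maths behind that claim. The radio rates and efficiency factor are assumptions on my side (roughly what an 802.11ac Wave 2 AP is marketed at), but they show why a single 1GbE uplink becomes the bottleneck and why 2.5/5GbE is ‘enough’ without jumping to 10GbE.

```python
# Aggregate wireless throughput of a modern AP versus its wired uplink.
# Radio rates and the efficiency factor are illustrative assumptions.
radio_rates_gbps = {
    "2.4 GHz (802.11n)": 0.45,        # assumed PHY rate
    "5 GHz (802.11ac Wave 2)": 1.7,   # assumed PHY rate
}
real_world_efficiency = 0.6           # assumed usable fraction of the PHY rate

usable_gbps = sum(rate * real_world_efficiency for rate in radio_rates_gbps.values())
print(f"Usable wireless throughput: ~{usable_gbps:.2f} Gbps")   # ~1.29 Gbps

for uplink_gbps in (1.0, 2.5, 5.0, 10.0):
    verdict = "fine" if uplink_gbps >= usable_gbps else "bottleneck"
    print(f"{uplink_gbps:>4} GbE uplink -> {verdict}")
```

And since the Cat5e/Cat6 cabling that already runs to most ceilings cannot carry 10GBASE-T over a full 100m, a 2.5/5GbE standard over that same cable suddenly makes a lot of sense.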

Continue reading Cisco NBase-T – lipstick on your Cat5e pig


Intel & VMware bring HyperSocket Infrastructure

History

Virtualization has come a long way. In 1987, RAID (Redundant Array of Independent Disks) was the first introduction to obfuscating what was really happening at a lower level from the level above. Dozens of layers of obfuscation have been added over the last 30 years, to the point where today it is widely known as virtualization. It wasn’t until VMware came to market with their server obfuscation that this methodology was adopted by many competitors.

Intel

The direct result of virtualization was that by adding more layers of obfuscation we were able to increase speeds and lower the overall latency. Lately there has been a great discussion about whether the obfuscation should happen on top of or within the kernel of the virtualization operating system.

VMware and Intel, who have always had a great partnership, will now come forward with the next step: HyperSocket Infrastructure. The basic concept is that HyperSocket Infrastructure will run completely in the CPU, without the need for an operating system kernel. Today, for a process to be completed, it traverses the hypervisor kernel and the hardware at least 7 times. The speed increase from doing everything within the processor will be exponential.

VMware already successfully reduced their kernel footprint when they moved from ESX to ESXi. This time the software will be slimmed down so far that it will only use processor instruction sets instead of compiled programming languages. The name HyperSocket was chosen to indicate that, in order to run this new architecture, one will need at least 4 sockets (hyper = 4 dimensions in geometry).

Future

It’s not yet known whether this will be in the next vSphere release (v10) or already in the next intermediate update. [note: VSAN was introduced in vSphere 5.5 so HyperSocket Infrastructure could be introduced in v6.6]. 

During the press briefing we already got a view of the future roadmap of Intel and VMware, which showed that HyperSocket Infrastructure is just the first step towards a new era of Application Virtualization, where the need for datacenters will be completely eliminated and everything will be virtualized within the processors of client devices such as self-driving cars and smartwatches.

Read more in the joint press release here.


Veeam updates – sales reference

Lately I have been asked by resellers in the field in Belgium to help position Veeam at larger accounts. Most of those customers have tested Veeam at some point, either in a full PoC (Proof of Concept) or just in a lab. Some of them got the value immediately; others were not ready to jump but feel forced today to revisit that decision because their legacy system still lacks a modern approach to protecting virtualized workloads. One of the questions I get at that point is: what has Veeam done since version X.x?

Reverse Roadmap

Instead of making bold statements and hollow promises like many do, Veeam has this brilliant concept – and please don’t trademark this 😉 – of showing the reverse roadmap. I wish more vendors did exactly this.

I have made my own, slightly extended version of it, with the intermediate patches, release dates, links to the full release notes and some meta information. Feel free to share it with your prospects!
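
If you want to keep a reverse roadmap like this yourself, a simple structured format goes a long way. The sketch below is purely illustrative (the versions, dates and URLs are placeholders, not the actual Veeam release history), but it shows the kind of metadata I track per release.

```python
# Minimal structure for a "reverse roadmap" reference sheet.
# All entries are placeholders for illustration, not real release data.
from dataclasses import dataclass, field

@dataclass
class Release:
    version: str                      # e.g. "X.0"
    released: str                     # release date (ISO format)
    release_notes_url: str            # link to the full release notes
    highlights: list[str] = field(default_factory=list)

reverse_roadmap = [
    Release("X.0", "YYYY-MM-DD", "https://example.com/relnotes/x0",
            ["headline feature 1", "headline feature 2"]),
    Release("X.0 Update 1", "YYYY-MM-DD", "https://example.com/relnotes/x0u1",
            ["intermediate patches and fixes"]),
]

# Print the roadmap newest-first, the way a prospect would read it.
for rel in reversed(reverse_roadmap):
    print(f"{rel.version} ({rel.released}): {', '.join(rel.highlights)}")
```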

Continue reading Veeam updates – sales reference


Production storage needs new benchmarks

Dragster Benchmarking

I’ve ranted about this more than once. Benchmarks are 99% of the time utter bull… and tell you nothing about the solution’s real capabilities, let alone what they mean for your environment. The dragster benchmarks (e.g. SPC-1) are just a show-off competition with little to no value to you. Allow me to bring up a couple of points why:

  • Generally speaking the dragster benchmark is based on 100% 4k reads. Let me assure you that there is not a single system out there – certainly not yours – that does 100% 4k reads, let alone 4k blocks in general. It’s when these blocks get bigger (16k, 64k and towards 128k/256k) that things get interesting and these machines start to show very different numbers compared to those 4k reads.
  • Nowadays you’ll see vendors adding ‘random’ reads, where historically it was always ‘sequential’ reads. This is mostly seen when the working set comes from flash, where random versus sequential hardly matters.
  • Writes will also show huge differences compared to those 4k reads, as we now have a full IO-acknowledgement path to follow. In a dual-controller system you’ll have to take note of the controllers’ caching (and CPU) capabilities, while in a scale-out architecture you’ll see penalties from traversing the network once or multiple times. (See the sketch below for what a more realistic mix looks like.)
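
To make that concrete, here is a minimal sketch contrasting a ‘dragster’ profile with a more realistic mixed profile. The percentages are assumptions I picked for illustration, not a measured workload, but they show why a single 100% 4k-read number tells you very little about your environment.

```python
# Contrast a "dragster" benchmark profile with a more realistic mixed workload.
# The realistic mix is an illustrative assumption, not a measured trace.
dragster_profile = {
    "read_pct": 100,
    "random_pct": 100,
    "block_sizes": {4096: 1.0},       # 100% 4k reads
}

realistic_profile = {
    "read_pct": 70,                   # assumed 70/30 read/write mix
    "random_pct": 80,
    "block_sizes": {                  # assumed fraction of IOs per block size
        4096: 0.30,
        16384: 0.25,
        65536: 0.25,
        262144: 0.20,
    },
}

def avg_block_kib(profile: dict) -> float:
    """Weighted average block size in KiB for a workload profile."""
    return sum(size * frac for size, frac in profile["block_sizes"].items()) / 1024

for name, profile in (("dragster", dragster_profile), ("realistic", realistic_profile)):
    print(f"{name:>9}: {profile['read_pct']}% reads, "
          f"{profile['random_pct']}% random, avg block {avg_block_kib(profile):.0f} KiB")
```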

Bring in the data reduction

Some vendors are even excluded from publishing the standard benchmarks like SPC-1, because these benchmarks don’t allow data reduction to take place. A lot of new technologies have, because of the power of flash, put data reduction at the very foundation of their architecture. I’m thinking of SimpliVity, for example, with their “the best IO is the one you never have to write”, and, in the case of this post, PureStorage.

I was quite pleased to read PureStorage’s blog post last week, where they prepared a version of the vdbench performance tool for you, along with a lengthy post about the merits and necessity of a [new] tool like this. It is definitely worth a read! Of course this whole thing is self-serving for them, but it certainly can’t hurt in moving the needle towards more transparent and honest marketing.

image courtesy of PureStorage

My Take

I praise where possible and heckle where necessary. I will time and time again ridicule PureStorage when they go after EMC with another cheap-shot campaign (others do the same, and I heckle them as well), but this is already the second time I have to praise them for doing what I feel is the right thing to help answer the real question the customer is asking: what is your solution going to do in my environment? And this, my friends, is what most vendors keep ignoring time and time again. So please, if you do yet another benchmark, tell them what that means … for ME.

Disclaimer: I am in no way compensated by PureStorage for this post. That being said, both PureStorage and SimpliVity have used my consulting services in the past.
