The NWA Area MicroStrategy user group is kicking off 2016 with a new meetup group. Their first meeting is scheduled for Thursday, Jan 18, 2016 at 6 PM. The location is TBD and will be announced shortly. If you haven’t joined or RSVP’d yet, head over to Meetup.com and RSVP now!
Per the MicroStrategy website:
- General workflow and stability enhancements
- Enterprise Manager is now independent from Operations Manager and HealthAgent
- Dynamic dates in Desktop/VI
- Quickly import custom visualizations created using the MicroStrategy Developer Library (MSDL) in Desktop
- Mobile iOS and Android usability improvements
- Customizable MicroStrategy Web start page
- Automatic partitioning option for Intelligent Cubes
- Ability to update a cube in memory without having to import all the data again from the data source
- New Image Layout visualization
- Enhancements for Map visualizations
- Ability to apply a formatting theme to an entire Report Services Document or to a specific object
I must say I have been quite impressed with the volume of enhancements that have been coming from MicroStrategy in the last year. Maybe it’s because the minor releases are all but gone? What happened to 10.1.2? It makes upgrades a little more challenging since there is so much more to test each release cycle, but they are definitely pushing out more software changes each release than they have in years.
On the downside, it would seem support has fallen off a bit. I have cases that are open for months at a time, simply because it takes weeks in between responses. Perhaps something had to give.
I had a potential customer ask me yesterday if we could build mobile apps. Apparently they had just spent a lot of money on an app that would allow them to collect data in the field, but it did not integrate with any of their shipment data or POS sales data to complete the picture for their field personnel. This company also did not have the resources to build any kind of custom app from scratch themselves. So, if you are thinking about taking this feat on, let me break it down for you in a few high-level steps.
There are many paths you can walk down, but I am going to focus on two specifically: building a custom app vs. using MicroStrategy Mobile.
Scenario 1 – Building a custom app from scratch.
First off, you are going to need a good overview of iOS development. Try here for starters. When you go down this road, you are going to need a Mac to do your development on. You will also need a developer account with Apple to be able to publish the app. If you want to be backwards compatible, you may need more than one Mac to test on, as the Xcode environment is tied to the OS (from what I can tell). You will also need an iPad or two, or three, for testing. If you want to support iPhones, you will need some of those. What about the version of iOS? We are currently on 9.x. Do you want to support 8.x as well? While there are simulators for some of this in Xcode, if you want to make sure your app works across all of these environments, I think it is a good idea to develop a test plan on actual hardware so that your app isn’t flaky.
On the app side, you are going to be writing a lot of Objective-C code to run the app, but you are also going to need a service in the background to dish out data and be the backend for the app. I doubt you would want the app to connect directly to your database. This service should also handle secure logins, passwords, user management, resetting a user password – all of the plumbing that will enable a user to manage the app, their account, and themselves. It also needs to grab data from the data warehouse and package it back to the app. You might need to compress it to make the app faster.
Now, once you’ve climbed through all of that, you get to handle change management coming from user feature requests, from corporate, and from bugs. You get to roll out new versions and craft a test plan to make sure it all remains backwards compatible with older versions of iOS, across all Apple devices. To keep up, you may have to juggle a roadmap with multiple versions in play at various lifecycle stages – in other words, you may be performing user acceptance testing on version 2.5 while you are publishing version 2.4 to the App Store, as well as scoping changes for version 3.0 to be released next quarter.
I would not say any of this is rocket science, but it can grow to be quite an undertaking if you want to do it right. Wait – where is your Android app? The corporate CFO has an Android and wants a version for his phone. Where do you start for that? Now, remember that app you thrashed in the comments last week because it was so buggy? Feeling even the least bit sorry for that company if it is one or two people trying to keep up with all of this?
Scenario 2 – MicroStrategy Mobile
Now that your head is spinning from trying to develop and support a custom app, there is a bright side to all of this – MicroStrategy Mobile. There are lots of other platforms, and this article could go on for days, but we have direct experience in MicroStrategy Mobile so we will give a glimpse of this one to compare and contrast.
First off – you will need a MicroStrategy environment. This of course is not free – you will need an enterprise license, and each user will need a license. Second – you will need to develop your data objects. This also is not for the faint of heart. Most companies do all of this because they want slick reports, dashboards, and gorgeous data visualizations, regardless of mobile or not. This is pretty much MicroStrategy’s bread and butter. It handles all of the service back end, scheduling, report automation, security, throttling, and presentation. You just need to get your data into a data warehouse. There are lots of strategies for BI – but if you go down the MicroStrategy route, then you inherit a mobile strategy second to none.
All of the reports you built for your Monday morning dashboard can translate directly into a mobile app with just a small amount of effort. There is no source code you need to master. MicroStrategy handles much of the iOS compatibility and hardware testing. It’s almost like a buy one, get one free: you get enterprise-class reporting along with enterprise-class mobile.
MicroStrategy also has Transaction Services, which allows you to input data on the iPad. Need to capture store shelf quantity, or survey questions? No problem. It can capture data alongside all of your enterprise data warehouse metrics for a complete, 360-degree dashboard. It can show images, take pictures, capture data, report data, drill into your data, and visualize your data in graphs and charts. You can build an entire customer service app – just in MicroStrategy – with your company icon and logo.
Now, if you just needed a mobile app, is this the easier route? It depends on how you look at it. There is probably an equal amount of effort in getting either scenario up to speed. I won’t lie and say that MicroStrategy is easy. The payoff comes downstream when you need to support your app. If someone requests changes to your app, you can make a change to your dashboard inside of MicroStrategy – without needing to recompile, test, and publish your app to the Apple App Store. Depending on the significance, the change could literally take you 2 minutes to log in and make. Want to roll out a version of this app for a new customer? Copy, paste, and change the logo – again, maybe a 10-minute change. Because of the object-oriented development nature of MicroStrategy, each dashboard will inherit all of the building blocks in the foundation you build. So if you formatted a date wrong, you just change the date attribute. All of your reports, dashboards, and mobile apps then inherit the change – no need to touch them.
Hours or days – not weeks or months. No Objective-C code to maintain. No API service backend to maintain.
80% of what you build in MicroStrategy is reusable. This is not the case with Tableau, Qlikview, SSRS, Crystal Reports, or custom ASP.NET portals. This is why we lead with a MicroStrategy solution. If we build a customer a neat dashboard to be consumed in a web browser, and the CFO decides they want it on their iPad, we just have to copy, paste, and do a little resizing so it fits nicely and voila – instant mobile app. Maybe less than a day’s work. If you are building a custom app from scratch – where is your Git repository hosted again?
If your organization could benefit from a BI platform to deliver reporting, dashboards, and data discovery – and also needs a mobile app strategy – then this seems like a no-brainer to me. Even if you think it might only be useful down the road, having a combined strategy for BI and mobile makes sense. If you go down the road of separate BI and mobile, then you are eventually going to have to join them up, and it will be twice the support at that point. Twice the cost and twice the fun.
Please contact us today to see how we can help you with your mobile app and BI challenges.
Consumer Goods Technology came out with a whitepaper today on Bridging the Data Divide. In this whitepaper is a quote from Gordon Wade, senior vice president of category management best practices at the Category Management Association (CMA):
“Every category manager, whether at a retailer or a manufacturer, has more data than anyone could possibly review, much less analyze and understand,”
The paper also goes on to discuss some very neat things people are doing with mobile and shopping data to simulate store sales and people movements through aisle changes and shopper personalization. This might be the beginning of the tidal wave that is coming. My question to you is – what is your data strategy? Are you feeling like you are already drowning in data? The goal is not to drown retailers or suppliers, but to find ways to integrate data that will keep your business afloat in both the short and long term.
It is clear that the ability to combine new data sources in innovative, cohesive ways will be integral to grocery and CPG success. We can help with that. Store sales and inventory, weather, demographics, store traits, social media, supply chain – we combine all of this into powerful, user driven analytics in a drag-n-drop, build your own reporting and dashboard environment. We are also willing to take in your specific data sources to give you a complete picture of your business, and apply statistical models for predictive metrics to your data for even more insights.
Please contact us today to see how we can help you out of your data deluge.
One of the most common uses of machine learning in analytics is forecasting time-based data. It’s the quintessential sales question – what will my sales look like next month, or next quarter, or even next year – the proverbial crystal ball, if only it were that simple. Something we were able to put together fairly quickly using MicroStrategy’s Visual Insight and R integration is an “Ordinary Least Squares” regression that fits the best curve capturing the general trend and seasonal variability of Walmart POS data to predict future sales.
The formula is:
Y = bTrend × Trend + Σ_i (bSeason_i × Season_i) + bIntercept
- Y is a numeric metric (called the Dependent Variable)
- Trend is a numeric metric that’s an arithmetic sequence of monotonically increasing values
- Season_i is a binary indicator metric derived from Season, a numeric or string metric that represents each season. Binary indicators have a value of 1 for the i-th season and 0 for all other seasons. For n seasons, there are n−1 Season_i variables (the remaining season is the baseline, absorbed into the intercept)
- bTrend, bSeason_i, and bIntercept are coefficients determined by the regression algorithm.
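A minimal sketch of this model in Python, using NumPy’s least-squares solver (the monthly sales figures below are invented for illustration, not actual Walmart POS data):

```python
import numpy as np

# Hypothetical monthly sales for two years (24 observations, 12 seasons).
sales = np.array([100, 95, 110, 120, 130, 150, 160, 155, 140, 125, 115, 180,
                  108, 102, 118, 129, 141, 162, 172, 168, 151, 135, 124, 194], float)
n = len(sales)
seasons = np.arange(n) % 12          # month-of-year index 0..11

trend = np.arange(1, n + 1)          # monotonically increasing trend variable

# Binary season indicators: n-1 columns (season 0 is the baseline
# absorbed into the intercept).
indicators = np.zeros((n, 11))
for i in range(n):
    if seasons[i] > 0:
        indicators[i, seasons[i] - 1] = 1.0

# Design matrix: [Trend, Season_1..Season_11, Intercept]
X = np.column_stack([trend, indicators, np.ones(n)])

# Ordinary least squares: solve for bTrend, bSeason_i, bIntercept
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
b_trend, b_seasons, b_intercept = coef[0], coef[1:12], coef[-1]

# Forecast the next 12 months by extending the trend.
future_trend = np.arange(n + 1, n + 13)
future_seasons = np.arange(12)
forecast = b_intercept + b_trend * future_trend
forecast += np.where(future_seasons > 0,
                     b_seasons[np.maximum(future_seasons - 1, 0)], 0.0)
```

The same fit can of course be produced inside MicroStrategy via its R integration; this sketch just shows the mechanics of the design matrix.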
As sales come in for the coming months, we should be able to gauge the accuracy of our prediction for the rest of the year. If this holds true, we could use it for some of our business decisions going forward. We could also look at just the latest complete months, so we would not see that monthly drop in month 201402. We could also look at this weekly by switching out just a couple of metrics.
Something else we could do is create a variance against actual POS sales, and if the variance exceeds some number, like 10% difference plus or minus, we could create an alert and send out warning emails to key people in our business so that they can plan for unanticipated high sales, or research a drop in sales.
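One way to sketch that alert logic (the 10% threshold and the data are placeholders, not part of any existing system):

```python
def check_forecast_variance(actual, predicted, threshold=0.10):
    """Return (period, variance) pairs whose absolute percentage
    variance against the forecast exceeds the threshold."""
    alerts = []
    for period, (a, p) in enumerate(zip(actual, predicted), start=1):
        variance = (a - p) / p          # signed % difference vs. forecast
        if abs(variance) > threshold:
            alerts.append((period, variance))
    return alerts

# Example: month 3 comes in 25% over forecast, month 5 is 20% under.
alerts = check_forecast_variance(
    actual=[100, 102, 125, 99, 80],
    predicted=[100, 100, 100, 100, 100])
# Each alert here could trigger a warning email to key stakeholders.
```

A positive variance flags unanticipated high sales; a negative one flags a drop worth researching.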
Please contact us to see how we can help you leverage regression analysis with your data to help predict your future!
The business intelligence landscape is rapidly changing, and there is a lot of confusion about the differences between BI, Analytics, Big Data, and Data Mining. What’s more, you turn your head for just a minute and there are whole new classes of terms that you’ve never even heard of before.
In the article below from Dennis Junk at Aptera’s blog, he breaks it down into four main categories to help you understand. As a CPG company supporting Walmart, I believe it is important to have a strategy for all of these concepts – especially in the wake of the new supplier terms and squeezing Walmart is employing. If you don’t, we can help you with that, as we are a full-service Business Intelligence company that can deliver Analytics from Big Data sources and use the R statistical package to mine the data for meaningful insights.
Business Intelligence (BI)
This is the broadest category and encompasses the other three terms here (at least as they’re used in a business IT context). BI is data-driven decision-making. It includes the generation, aggregation, analysis, and visualization of data to inform and facilitate business management and strategizing. All the other terms refer to some aspect of how information is gathered or crunched, while BI goes beyond the data to include what business leaders actually do with the insights they glean from it. BI therefore is not strictly technological; it involves the processes and procedures that support data collection, sharing, and reporting, all in the service of making better decisions. One of the trends in recent years has been away from systems that rely on IT staff to provide reports and graphs for decision-makers toward what’s called self-service BI—tools that allow business users to generate their own reports and visualizations to share with colleagues and help everyone choose what course to take.
Analytics
This is all the ways you can break down the data, assess trends over time, and compare one sector or measurement to another. It can also include the various ways the data is visualized to make the trends and relationships intuitive at a glance. If BI is about making decisions, analytics is about asking questions: How did sales for the new model compare to sales for the old one last month? How did one salesperson do compared to another? Are certain products selling better in certain locations? You can even ask questions about the future with systems that perform Predictive Analytics. Some companies treat analytics and BI as synonymous—or simply rely on one to the exclusion of the other. But analytics is generally the data crunching, question-answering phase leading up to the decision-making phase in the overall Business Intelligence process.
Big Data
This is the technology that stores and processes data from sources both internal and external to your company. Big Data usually refers to the immense volumes of data available online and in the cloud, which requires ever more computing power to gather and analyze. Because the sources are so diverse, the data is often completely raw and unstructured. Since you’ll probably be using this data for purposes it wasn’t originally intended to serve, you’ll have to clean it up a bit before you can garner any useful insights from it. The systems you put in place internally to track KPIs are obviously the main source you turn to when you need to answer a question about your business, but Big Data makes available almost limitless amounts of information you can sift through for insights related to your industry, your business, your prospective customers. Big Data is the library you visit when the information to answer your questions isn’t readily at hand. And like a real library it allows you to look for answers to questions you didn’t even know you had.
Data Mining
Finding answers you didn’t know you were looking for beforehand is what Data Mining is all about. With so much information available, you can never be sure you’re not overlooking some key fact pointing the way to better performance. Data Mining is the practice of sifting through all the evidence in search of previously unrecognized patterns. Some companies are even hiring Data Scientists, experts in statistics and computer science who know all the tricks for finding the signals hidden in the noise. Data Mining probably fits within the category of analytics, but most analytics is based on data from systems set up to track known KPIs—so it’s usually more measuring than mining.
Not everyone will agree on these terms, as Dennis points out in his article, but it’s a good start. As a core strategy, I believe your BI should encompass all three: easy-to-use analytics that allows your users to ask their own questions, big data to capture MORE than just sales data, and data mining so that you can leverage all of your data for the best insights possible.
Please contact us to see how we can help you create a strategy in all of these areas that might unlock a competitive advantage you didn’t know existed!
Why would you want to use cluster analysis on your retail sales data? Well, cluster analysis helps you identify non-independence in your data. Here is an example to help illustrate the point. Let’s say we want to ask loads of teachers from many different schools what they think of their principal. If you ask two teachers from two different schools, you will get two completely different answers that will be independent. But if you ask two teachers from the same school, the answers will not be completely independent and could be very similar – but not EXACTLY the same. Now, if your job was to take the raw data and try to predict which school each teacher came from based on their answers – then you have an application of clustering.
The same thing can be applied to Walmart store performance for a supplier. You have some data points for a store: how long that store has been open, how many competitors are located in its vicinity, your product’s sales performance for that store, some demographics for the area like unemployment and population, possibly even some historical weather data. Now you use a clustering algorithm to group the stores that are most closely related. This could be the first step in identifying underperforming stores and why. It could give you a viable store list for a product test based on more than sales performance. With enough demographic data, it might help you further identify your product’s identity and who your actual customers are. You might not find anything you didn’t already know. The important thing is that you are diving into your data to truly understand it on a level you never have before, and uncovering one of these nuggets could make millions of dollars of difference to your company.
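As an illustrative sketch of the idea (a plain NumPy k-means on made-up store attributes; our actual analysis runs through MicroStrategy, and real store data would replace these numbers):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical store features: [years open, competitors nearby,
# weekly sales $k, area unemployment %] — illustration data only.
stores = np.array([
    [12, 1, 95, 4.1], [10, 2, 88, 4.5], [11, 1, 92, 3.9],   # mature, strong
    [3, 5, 40, 7.8], [2, 6, 35, 8.2], [4, 5, 44, 7.5],      # young, struggling
    [7, 3, 70, 5.5], [8, 3, 66, 5.9],                        # middle of the pack
], float)

# Standardize features so no single unit dominates the distance.
X = (stores - stores.mean(axis=0)) / stores.std(axis=0)

def kmeans(X, k, iters=100):
    # Seed centers with k randomly chosen stores.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each store to its nearest center.
        dist = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        # Move each center to the mean of its assigned stores.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

labels = kmeans(X, k=3)
# Stores sharing a label share a performance/demographic profile.
```

Stores that land in the same cluster despite very different sales are exactly the ones worth investigating.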
Once you’ve built your base analysis – in our case we built the report you see above, turned it into an in-memory cube, and then built a MicroStrategy dashboard on top of it – we can then explore slicing and dicing our data along the different data points to help identify whether any of the metrics in our analysis are a key contributor to a cluster alignment. This way we can determine what factor affects sales the most. Could it be store age? Store square footage? Unemployment? Ethnic breakdown? Which of these are driving markdowns?
The great thing about using this analysis as a MicroStrategy dashboard is that it is pretty easy to tweak to look for your top-performing stores, and refreshing the data source is very easy. In fact, this report could be automated each week and emailed to you. There might even be an application that looks for cluster changes and generates an alert, so you only need to be bothered if anything changes.
Contact us today to discover how Vortisieze analytics can help you explore your own data science.
No. 18 Arkansas will take on Toledo, a team picked to win its conference, on Saturday in Little Rock. The game kicks off at 3 p.m. on SEC Network Alternate.
Game 2: Toledo | Saturday, Sept. 12 | War Memorial Stadium
Kickoff: 3 p.m. CT
TV: SEC Network Alternate
This is an interesting article on how Mobile Location Analytics (aka Beacon Technology) is helping brick-and-mortar retailers compete effectively with online retailers by capturing customer behavior near and in the store.
You can read the article below. However, the question for you is: how will you incorporate this new data source once it is provided to you by the retailer?
Will your rigid, difficult-to-modify DSR incorporate this data stream in a timely manner – or will it take the usual months or years that data model changes sometimes take in a data warehouse environment?
Contact us today to discover how Vortisieze analytics can rapidly adapt to new, sometimes ad hoc (think your latest spreadsheet creation), data sources.
This article is so important we are reprinting it in its entirety. As always, the link to the source is below.
Please contact us to see how predictive analytics can give you the competitive advantage over your brand’s competitors.
Ideally, a retailer’s customer data reflects the company’s success in reaching and nurturing its customers. Retailers built reports summarizing customer behavior using metrics such as conversion rate, average order value, recency of purchase and total amount spent in recent transactions. These measurements provided general insight into the behavioral tendencies of customers.
However, reports summarizing average behavior don’t provide the useful insights needed to determine how individual customers are likely to behave because general behavior tendencies are simply too broad. In order for retailers to create a meaningful dialogue with customers that honors the shopper’s preferred level and mode of engagement, it takes more than summarized reports, which is why customer intelligence and predictive analytics provide the opportunity to significantly change the retail marketing industry.
Customer intelligence is the practice of determining and delivering data-driven insights into past and predicted future customer behavior. To be effective, customer intelligence must combine raw transactional and behavioral data to generate derived measures. The process can best be described using the saying, “It’s not the data that is collected, it’s the data that is created.” Put into a predictive modeler’s perspective, the team not only collects a large amount of data, but also contextualizes that data by building derived attributes that provide additional insight into customer intent.
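As a toy illustration of “created” versus “collected” data (this sketch is ours, not from the article; the transaction records and attribute names are invented), derived attributes such as recency, frequency, and average order value can be built from a raw transaction log:

```python
from datetime import date

# Toy transaction log: (customer_id, order_date, order_value) — made-up data.
transactions = [
    ("c1", date(2016, 1, 3), 40.0),
    ("c1", date(2016, 1, 20), 55.0),
    ("c2", date(2015, 11, 2), 250.0),
    ("c2", date(2016, 1, 25), 310.0),
    ("c3", date(2015, 6, 14), 15.0),
]

def derive_attributes(transactions, as_of):
    """Turn raw (collected) transactions into derived (created)
    per-customer attributes: recency, frequency, average order value."""
    rollup = {}
    for cust, when, value in transactions:
        d = rollup.setdefault(cust, {"orders": 0, "total": 0.0, "last": when})
        d["orders"] += 1
        d["total"] += value
        d["last"] = max(d["last"], when)
    return {
        cust: {
            "recency_days": (as_of - d["last"]).days,
            "frequency": d["orders"],
            "avg_order_value": d["total"] / d["orders"],
        }
        for cust, d in rollup.items()
    }

attrs = derive_attributes(transactions, as_of=date(2016, 2, 1))
# attrs["c2"] → recency 7 days, 2 orders, $280 average order value
```

None of those three attributes exist in the raw log; each is created from it, which is the modeler’s point about contextualizing data.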
But how do data scientists and predictive modelers determine which derived attributes are relevant? Usually data scientists lack the deep domain expertise needed to clarify and prioritize their efforts. Therefore, a collaboration with domain experts is essential. This collaboration is like a three-legged stool. Each leg is critical to the stool remaining stable and fulfilling its intended purpose. When it comes to generating customer intelligence, the three legs of the stool are retail experts, data geeks and coders, and predictive modelers or data scientists.
Retail experts have domain expertise and can best frame the problem customer intelligence is aiming to solve. They suggest derived attributes that will provide value to both the brand and the company’s marketing campaign. Data geeks are needed to program these ideas and store them in a suitable database, which can often lead to greatly increased data storage requirements for the retailer. However, the data can only be used to create solutions or make key marketing decisions if it’s properly stored and accessed. Inaccessible data means useless data and a wasted opportunity.
Predictive modelers and data scientists are then needed to use the stored data to build models that achieve those business objectives originally set by the retail expert. Predictive models find relationships between historic data and subsequent outcomes so that near-term and long-term customer behavior can be predicted. This leg of the stool aims to answer problems such as the likelihood of when a shopper will make their next purchase and what the value of that purchase will be. Sometimes, these relationships are so complex that only machine learning techniques will find them.
In a real-world example, consider a retailer that would like to appropriately message high-value, loyal shoppers who appear to be disengaging from the brand. A predictive model built from stored data could identify which shoppers are likely to purchase again within seven days, allowing the retailer to let them be the loyal customers they truly are. The predictive model can also show if certain shoppers are unlikely to purchase within seven days but have a high average order value. For these shoppers, the retailer could provide an incentive to bring the shoppers back to the brand. In either case, predicting what shoppers are likely to do is critical to understanding how best to complete the dialogue with them.
Moving forward, retailers will need to augment marketing decisions using insights gained from customer intelligence and predictive analytics. Each retailer’s data team must bring in elements from all aspects of the business, including retail experts, data geeks and predictive modelers. These key elements will set retailers up for success as we move forward into the era of big data.