From what I have seen, the 6-pager process, and more importantly the clear writing and efficient meetings it fosters, is pretty helpful. It's important to note that a 6-pager contains 6 pages of expected reading and can have as much additional appendix material as you want. I've seen "6-pager" documents that were dozens of pages long, and I have heard possibly apocryphal stories of 6-pagers hundreds of pages long for major projects. If the question was specifically about AI being a required component, idk. But a VP at Amazon presides over a part of the business so large that it is implausible that there aren't useful applications for AI in their domain.
The ideal of the 6-pager is magnificent. The reality of the 6-pager in 2019 is that it has become boilerplate where style overwhelms substance. Similar things can be said of their OP1 and OP2 planning processes, which are becoming increasingly cargo cult IMO. And that arises naturally from a static game whose most experienced players have become better at funding projects than they are at funding better projects.
Though very late to the party (modulo Alexa), Amazon has eagerly embraced AI and machine learning since 2015, but it lacks the leadership to formulate clear targets (see AutoML and AlphaGo Zero as examples of clear targets, IMO).
A recent article in _The Information_ claimed that their entire AI organization is generating less than $20M annually.
That is consistent with what I saw there: an effectively infinite number of codemonkeys throwing unprocessed data at randomly chosen AI algorithms downloaded off of GitHub and hoping for the best. It's an interesting experiment, but it seems to be about as efficient as a monkey throwing darts to pick stocks (which, admittedly, surprisingly often beats many biased investment advisors). Time will tell, no?
In my case, the project we wrote an OP1 for was judged to be so technically complex that anyone capable of executing on it would be a "flight risk" capable of commanding higher pay elsewhere.
Therefore they decided to destaff the effort and wait for me to prove them right, rather than give me a couple of engineers and enough rope to hang myself (which, given the track record that apparently established me as a "flight risk" in the first place, I most likely wouldn't have).
That just pushes the problem one level down the stack, no? And who sends back the work of the 6-pager Czar? IMO they need to refresh the process, and refresh it frequently. I don't see them doing that, though.
Similar things have happened with the well-intentioned "bar-raiser" process as well, IMO. I had bar-raisers in interview loops for my team who seemed more intent on blocking potential competitors to their niche than on hiring the best people, which was the whole point of said "bar-raising."
As an Amazonian far flung from AWS (I am in ground-level fulfillment), I have seen some incredible internal AZ programs that utilize ML in unexpected areas to save the company lots of money. So I'm going with yes.
I have never worked for Amazon so I'm just speculating, but if you have a culture of being data driven, specifically in the ways you optimize things, then I'm guessing you'll probably be pretty successful, and it's not a giant leap from there to plugging that data into some ML models. To plug that stuff into ML models requires an entire culture of data collection and some thought as to how you'd improve or benefit from it; it's sort of like the "no brown M&Ms" rider that Van Halen used to have.
The article overstates the use of ML, at least in fulfillment: we use a lot of math models (e.g., the Xpress solver) and optimization algorithms (traveling-salesman-type problems), but not so much in the ML space.
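To make "traveling-salesman-type problems" concrete: a minimal sketch of the kind of routing heuristic this refers to, e.g. ordering picks on a warehouse floor. This is a generic nearest-neighbor heuristic with made-up coordinates, not any actual Amazon system.

```python
# Generic nearest-neighbor heuristic for a traveling-salesman-type problem
# (e.g., sequencing pick locations). Illustrative only; not Amazon's system.
from math import dist

def nearest_neighbor_route(points, start=0):
    """Greedy tour: always visit the closest unvisited point next."""
    unvisited = set(range(len(points))) - {start}
    route = [start]
    while unvisited:
        last = points[route[-1]]
        nxt = min(unvisited, key=lambda i: dist(last, points[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

# Hypothetical pick locations on a warehouse floor (arbitrary coordinates).
picks = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 1)]
route = nearest_neighbor_route(picks)
```

Real solvers (like the Xpress solver mentioned above) use far stronger exact and heuristic methods; nearest-neighbor just shows the shape of the problem.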
Usually optimization takes forecasts as part of its inputs. It is actually a lot like reinforcement learning, where the sales forecasts are part of the environment. This is standard practice in supply-chain management, not just in retail or at Amazon.
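As a sketch of "forecasts as inputs to optimization": the textbook newsvendor model picks an order quantity directly from a demand forecast. This is a standard supply-chain illustration, not any company's actual system; all numbers are made up.

```python
# Textbook newsvendor model: a demand forecast (mean, std) is an input to
# the optimization, which picks the profit-maximizing order quantity.
# Illustrative only; all parameters are made up.
from statistics import NormalDist

def newsvendor_order_qty(forecast_mean, forecast_std,
                         unit_cost, unit_price, salvage=0.0):
    """Order quantity maximizing expected profit under a normal demand forecast."""
    underage = unit_price - unit_cost   # profit lost per unit of unmet demand
    overage = unit_cost - salvage       # loss per unsold unit
    critical_ratio = underage / (underage + overage)
    # Optimal quantity is the forecast distribution's quantile at the ratio.
    return NormalDist(forecast_mean, forecast_std).inv_cdf(critical_ratio)

qty = newsvendor_order_qty(forecast_mean=1000, forecast_std=200,
                           unit_cost=6.0, unit_price=10.0)
# Critical ratio = 4/(4+6) = 0.4, so order somewhat below the mean forecast.
```

A better forecast (smaller std) moves the decision closer to the mean, which is exactly why forecasting quality matters to the downstream optimizer.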
Edit: the AI part specifically.