One line there caught my eye. "The execs told Sun staff that the machine learning and artificial intelligence the company is emphasizing will serve as a way to free up time so staffers can create more content without getting bogged down in other, more tedious processes."
Back in the mid-1970s, when I was new at the ANPA/RI (the research arm of what is now the NAA), we identified one of those "more tedious processes": display ad dummying. It was obvious that before newspaper pages could be paginated (a critical goal then), the page geometries would need to be computed and passed to a pagination engine. Remember, this was a time when there was no QuarkXPress, no InDesign, not even desktop computers.
After prototyping a laser page imager for the ANPA/RI's patented page compositor invention, I thought ad dummying was ripe for automation. Some work on automated dummying had been done by my predecessor, Dave Reed. The ANPA had also funded similar work that was done at MIT. Why not pick up from there and make a practical solution?
So here we are in 2016 and the struggle for automated dummying of newspapers still isn't resolved. Why?
Michael Ferro talks of artificial intelligence and machine learning. Been there. Done that. (Well, maybe not for the audience analysis as Mr. Ferro envisions.) Still, with thousands of publications being dummied each week with Layout-8000™ (including all of Mr. Ferro's papers), why do only about one in five Layout-8000 users let the computer do most of the work?
Based on my 40 years of experience with this issue, may I humbly suggest that perhaps technology adoption isn't as easy as Mr. Ferro might think.
It's a topic of endless fascination to me. I learn something new every day. At many sites, dummying an edition with Layout-8000 takes about 10 minutes, and much of that time is spent just double-checking and tweaking what Layout-8000 proposes. At other sites, operators spend as much as four times that long. What's the difference? Does Layout-8000 need to be even smarter? Isn't a huge state-space search coupled with a finite state machine with 133 transitions enough?
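For readers unfamiliar with the jargon: a state-space search explores candidate layouts move by move, abandoning any branch that violates a constraint. A toy sketch (hypothetical names and rules; not Layout-8000's actual algorithm) of backtracking search over ad placements might look like this:

```python
# Toy illustration of a state-space search for ad dummying (hypothetical;
# not Layout-8000's actual code): assign display ads to pages by
# backtracking, pruning any branch that overflows a page's column-inches.

def dummy_ads(ads, page_capacity, num_pages):
    """Return one assignment of ad name -> page index, or None if impossible.

    ads: list of (name, column_inches) pairs.
    page_capacity: available column-inches per page.
    """
    result = [None]

    def search(i, loads, assignment):
        if i == len(ads):                      # goal state: every ad placed
            result[0] = dict(assignment)
            return True
        name, size = ads[i]
        for page in range(num_pages):          # transitions from this state
            if loads[page] + size <= page_capacity:
                loads[page] += size
                assignment[name] = page
                if search(i + 1, loads, assignment):
                    return True
                loads[page] -= size            # dead end: backtrack
                del assignment[name]
        return False                           # prune this whole branch

    search(0, [0] * num_pages, {})
    return result[0]

plan = dummy_ads([("Ad A", 60), ("Ad B", 50), ("Ad C", 40)],
                 page_capacity=100, num_pages=2)
print(plan)  # {'Ad A': 0, 'Ad B': 1, 'Ad C': 0}
```

A real dummying engine must also honor stacking rules, section assignments, color positions, advertiser requests and more, which is why the state space gets huge and why an extra layer of control logic (the finite state machine) is needed to steer it.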
Is it about better GOFAI (Good Old-Fashioned Artificial Intelligence)? It's something I studied. My early-1970s research papers on chess programming appeared in SIGArt. In 1980 we at the ANPA/RI installed the first production version of Layout-80 at the Pittsburgh Post-Gazette. It could and did auto-dummy.
So, Mr. Ferro, is it just a little AI and ML that will reduce the tedium of ad dummying at the tronc papers?
There is good reason to doubt it. Before I tell you what I've concluded is the overarching impediment to progress, let's consider what typically happens. Layout operators know things that aren't told to Layout-8000 (or, for that matter, recorded anywhere). Consider the interface problem: 1) advertiser requests aren't entered into front ends, or, if they are, they aren't passed to Layout-8000 (would you believe that some front-end vendors claiming a Layout-8000 interface actually sabotage their interface to Layout so as to favor their own dummying products?); 2) many former MEI ALS sites have a very hard-to-break culture of manually placing every ad; 3) quick-and-dirty manual placement takes precedence over doing a one-time, more complete product description and setup. And so it goes.
As we help corporate newspaper groups like Gannett, tronc (Tribune), Lee Enterprises and others consolidate dummying into design centers, we have found the search for more automation continues. Recently we added what we call LayoutHistoryAdBoss™. This is a module that records the decisions of Layout designers in context. It learns their dummying expertise. (Tasks, not people, are moved to design centers. What those people know needs to be moved to the design centers.)
So we do artificial intelligence and machine learning. Behind the scenes we gather statistics on every Layout-8000 dummying session. We know how long operators take, the description of the product, the completeness of the ad interface data, the product setup, etc.
And still this isn't enough. What's really missing?
Mr. Ferro, I've come to believe progress depends most on agility. Being able to quickly make and deploy small improvements to systems is absolutely critical.
So this is an argument for less of the bureaucracy we see at corporate sites. Speeding up change is what maximizes improvement. Consider a corporate testing cycle that lasts longer than the interval between releases. Of what use is that? Does it reduce problems with upgrades? Unfortunately, the contrary is true.
I'm reminded of a story my daughter, Sharon, told me about her programming job at Dell. She was responsible for a supply chain application. A new release was deployed, and folks in Ireland and Italy started having trouble accessing the system. It turned out that O'Leary, O'Mally and D'Orazio had problems, as did others whose names contain single quotes; the apostrophes tripped an error in the coding of an MS SQL query statement. The fix took five minutes to code. The approvals to deploy the fix took an entire day of hand-carrying paperwork for signatures by managers who were unlikely to understand any technical explanation of the problem or its fix.
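The bug pattern behind that story is a classic, and the five-minute fix is just as classic. A sketch (using Python's sqlite3 here purely for illustration; the original system was MS SQL): pasting the name into the SQL string breaks on the apostrophe, while a parameterized query lets the driver quote the value safely.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, region TEXT)")
conn.execute("INSERT INTO users VALUES ('O''Leary', 'Ireland')")

name = "O'Leary"

# Broken: the apostrophe in the name terminates the SQL string literal
# early, producing invalid SQL (and, worse, an injection vector).
try:
    conn.execute("SELECT region FROM users WHERE name = '%s'" % name)
except sqlite3.OperationalError as e:
    print("query failed:", e)

# Fixed: a parameterized query passes the value separately, so the
# driver handles the quoting and the apostrophe is just data.
row = conn.execute("SELECT region FROM users WHERE name = ?", (name,)).fetchone()
print(row)  # ('Ireland',)
```

The same one-line change (concatenation to placeholder) is available in every mainstream SQL driver, which is exactly why a day of paperwork to ship it is so galling.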
We have many similar stories we could tell about the organizations we serve.
The lesson: more bureaucracy degrades, rather than improves, the yield from new computing technology.
It is wrong to believe that a longer deployment time frame avoids problems; it just increases the difficulty of finding and repairing them.
My recommendations: 1) be more agile, and 2) recognize that the issue isn't one of technology but of management. For the tedium to go away, what people do with technology must change.