
The Feynman Lectures on Physics, The Most Popular Physics Book Ever Written, Now Completely Online

Last fall, we let you know that Caltech and The Feynman Lectures Website joined forces to create an online edition of The Feynman Lectures on Physics. They started with Volume 1. And now they’ve followed up with Volume 2 and Volume 3, making the collection complete.

First presented in the early 1960s at Caltech by the Nobel Prize-winning physicist Richard Feynman, the lectures were eventually turned into a book by Feynman, Robert B. Leighton, and Matthew Sands. The text went on to become arguably the most popular physics book ever written, selling more than 1.5 million copies in English, and getting translated into a dozen languages.

The new online edition makes The Feynman Lectures on Physics available in HTML5. The text “has been designed for ease of reading on devices of any size or shape,” and you can zoom into text, figures, and equations without degradation. Dive right into the lectures here. And if you’d prefer to see Feynman (as opposed to read Feynman), we would encourage you to watch ‘The Character of Physical Law,’ Feynman’s seven-part lecture series recorded at Cornell in 1964.

The Feynman Lectures on Physics is now listed in our collections of Free eBooks and Free Textbooks.

Article: http://www.openculture.com/2014/08/the-feynman-lectures-on-physics-the-most-popular-physics-book-ever-written-now-completely-online.html


Amazing 1960s Predictions About Satellites, Email, and the Internet

It’s hard for many of us living here in the early 21st century to imagine a world without satellites. Well, in fairness, we don’t really think about satellites at all. Much like electricity or tap water, we only remember how vital they are when they stop working. Our GPS devices, smartphones, and modern military infrastructure all depend on satellites.

But before they ruled our world, experts were predicting how they might radically alter the way we communicate. And as with many predictions that we look at here at Paleofuture, they got a lot right, just not in the form that was initially imagined.

The February 17, 1962 issue of the Sunday comic strip Our New Age (in this case, run on a Saturday in the Chicago Daily News) envisioned the fantastic advancements that the introduction of satellites would allow. Everything from the decline of “old fashioned mail” to the rise of video-conferencing from home was predicted by Athelstan Spilhaus, dean of the University of Minnesota’s Institute of Technology and author of the comic strip.

“Communication satellites will revolutionize life in the next few years by relaying more messages faster from anywhere — as cheaply as undersea cables,” the comic strip proclaimed. And Spilhaus was absolutely right. Undersea cables are, of course, still a part of our modern communications infrastructure. But tossing a photo halfway around the world is a process that can now involve beaming data to the heavens.

The strip also promised that “old fashioned mail service” will only be for packages. Again, something that has indeed come to pass in many ways — and something that organizations like the United States Postal Service are struggling with.

As for children learning from home and only going to school for play? Not so much. The classroom hasn’t disappeared quite yet, despite a century of promises about distance learning.

“Researchers, thousands of miles away, may consult books in the Library of Congress or the British Museum,” the strip promised. This being 1962 — seven years before the ARPANET would gasp its first breaths — the prediction was incredibly ahead of its time. In fact, ever since I first tweeted a photo of this image a few years ago, I’ve heard from at least half a dozen organizations (including the British Museum) asking if they could use the image.

“To keep abreast, many people will have to work in shifts around the clock — because midnight here is midday elsewhere,” the strip explained. Modern communications technologies have changed the way that people around the world can work. Living in one time zone and effectively working in another is no longer so strange thanks to the infrastructure that makes geography irrelevant for certain jobs.

The strip imagined that the videophone (a dedicated appliance, pictured below) would aid in the push to make geography irrelevant to the businessperson of tomorrow.

“Perhaps communication satellites will even solve the traffic problem!” the strip insisted rather optimistically.

But here in 2014, satellites aren’t just about talking to your boss over the videophone. Nor have they ever been. This being a Sunday comic strip, it’s obvious that the focus on civilian communications was more appropriate than mentioning one of the most important functions of satellites: Spying on enemies from above.

And today, America’s dependence on satellites and their vulnerability to terrorism are cause for concern. Now that we expect our satellites to work like tap water, what would happen if there were a major disruption in service?

“The United States has strategic interests in preventing and mitigating dangerous space incidents, given its high reliance on satellites for a variety of national security missions and unparalleled global security commitments and responsibilities,” one recent report from the Council on Foreign Relations warned.

Everything from the safety of our thousands of nukes to paying for a cup of coffee could be affected if satellite communications were interrupted.

“Threats to U.S. satellites would reduce the country’s ability to attack suspected terrorists with precision-guided munitions and conduct imagery analysis of nuclear weapons programs, and could interrupt non-cash economic activity depending on the severity of the attack and number of satellites disrupted,” the recent report read.

The Sunday funnies never warned us that our reliance on satellites might become a crutch that, if kicked out from under us, creates mass chaos.

Article: http://paleofuture.gizmodo.com/amazing-1960s-predictions-about-satellites-email-and-1626476845

Why Are PC Sales Up And Tablet Sales Down?


Editor’s note: Peter Yared is the founder and CTO of Sapho and was formerly the CTO/CIO of CBS Interactive.

When iPads first came out, they were hailed as the undoing of the PC. Finally, a cheap and reliable computing device for the average user instead of the complicated, quirky PC. After a few years of strong growth for iOS and Android tablets and a corresponding decrease in PC sales, the inverse is suddenly true: PC sales are up and tablet sales are “crashing.” What happened?

The tablet slowdown shouldn’t be a surprise, given that tablets have hardly improved beyond relatively superficial changes in size, screen resolution, and processor speed. The initial market for tablets is now saturated: grandparents and kids have them, people bought them as Sonos controllers and such, and numerous households have them around for reading. People who want tablets already have them, and there’s just no need to upgrade because they more than adequately perform their assigned tasks.

Businesses and consumers alike are again purchasing PCs, and Mac sales are on the rise year-over-year. Businesses in particular are forced to upgrade older PCs now that Windows XP is no longer supported. When purchasing a new PC, the main driver to choose a PC versus a tablet is fairly obvious: If you are creating any type of content regularly, you need a keyboard, a larger screen, and (for most businesses) Microsoft Office.

Reigniting Tablet Growth with “Super Tablets”

For the tablet category to continue to grow, tablets need to move beyond what Chris Dixon calls the “toy phase” and become more like PCs. The features required for a tablet to evolve into a super tablet are straight from the PC playbook: at least a 13” screen, a 64-bit processor, 2GB of RAM, a 256GB drive, a real keyboard, an actual file system, and an improved operating system with windowing and true multitasking capability. Super tablet form factors could range from notebooks to all-in-one desktops like the iMac. Small 7” and 9” super tablets could dock into larger screens and keyboards.

The computer industry is littered with the detritus of failed attempts to simplify PCs, ranging from Sun Microsystems’ Sun Ray to Oracle’s Network Computer to Microsoft’s Windows CE. But this time, it’s actually different. The power of mass-produced, 64-bit ARM chips, economies of scale from smartphone and tablet production, and — most importantly — the vast ecosystem of iOS and Android apps have finally made such a “network computer” feasible.

Businesses Need Super Tablets

As the former CIO at CBS Interactive, I would have bought such super tablets in droves for our employees, the vast majority of whom primarily use only a web browser and Microsoft Office. There will, of course, always be power users, such as developers and video editors, who require a full-fledged PC. A souped-up tablet would indeed garner corporate sales, as Tim Cook would like for the iPad … but only at the expense of MacBooks.

The costs of managing PCs in an enterprise are enormous, with Gartner estimating that the total cost of ownership for a notebook computer can be as high as $9,000. PCs are expensive, prone to failure, easy to break, and magnets for viruses and malware. After just a bit of use, many PCs are susceptible to constant freezes and crashes.

PCs are so prone to failure that ServiceNow — a company devoted to helping IT organizations track help desk tickets — is worth over $8 billion. Some organizations are so fed up with problematic PCs that they are using expensive and cumbersome desktop virtualization, where the PC environment is strongly controlled on servers and streamed to a client.

And while Macs are somewhat better than Windows machines, if you think they are not problematic, I suggest you stand next to any corporate help desk or an Apple Genius Bar and watch.

The main benefits of super tablets to enterprises are their systems management and replaceability. Smartphones and tablets are so simple and easy to manage that they are typically handled by an IT organization’s cost-effective phone team rather than more expensive PC technicians, who are typically so overwhelmed with small problems that they cannot focus on fixing more complex issues. Apps can be provisioned and updated by both IT and end-users without causing conflicts or problems. If a device is lost, it is easy to remote wipe data and to provision a new device with all of the same settings.

Programs like BYOD (Bring Your Own Device) just accentuate the fact that smartphones and tablets are so easy to manage that enterprises are comfortable letting their employees pick the devices themselves. Users also get great benefits, including instant-on, long battery life, simplicity, and access to legions of apps from the iTunes and Play app stores.

Article: http://techcrunch.com/2014/08/23/why-are-pcs-up-and-tablets-down/


In Silicon Valley, Mergers Must Meet the Toothbrush Test

MOUNTAIN VIEW, Calif. — When deciding whether Google should spend millions or even billions of dollars in acquiring a new company, its chief executive, Larry Page, asks whether the acquisition passes the toothbrush test: Is it something you will use once or twice a day, and does it make your life better?

The esoteric criterion shuns traditional measures of valuing a company like earnings, discounted cash flow or even sales. Instead, Mr. Page is looking for usefulness above profitability, and long-term potential over near-term financial gain.

Google’s toothbrush test highlights the increasing autonomy of Silicon Valley’s biggest corporate acquirers — and the marginalized role that investment banks are playing in the latest boom in technology deals.

For Big-Data Scientists, “Janitor Work” Is Key Hurdle to Insights

Technology revolutions come in measured, sometimes foot-dragging steps. The lab science and marketing enthusiasm tend to underestimate the bottlenecks to progress that must be overcome with hard work and practical engineering.

The field known as “big data” offers a contemporary case study. The catchphrase stands for the modern abundance of digital data from many sources — the web, sensors, smartphones and corporate databases — that can be mined with clever software for discoveries and insights. Its promise is smarter, data-driven decision-making in every field. That is why data scientist is the economy’s hot new job.

Yet far too much handcrafted work — what data scientists call “data wrangling,” “data munging” and “data janitor work” — is still required. Data scientists, according to interviews and expert estimates, spend from 50 percent to 80 percent of their time mired in this more mundane labor of collecting and preparing unruly digital data, before it can be explored for useful nuggets.

“Data wrangling is a huge — and surprisingly so — part of the job,” said Monica Rogati, vice president for data science at Jawbone, whose sensor-filled wristband and software track activity, sleep and food consumption, and suggest dietary and health tips based on the numbers. “It’s something that is not appreciated by data civilians. At times, it feels like everything we do.”

Several start-ups are trying to break through these big data bottlenecks by developing software to automate the gathering, cleaning and organizing of disparate data, which is plentiful but messy. The modern Wild West of data needs to be tamed somewhat so it can be recognized and exploited by a computer program.

“It’s an absolute myth that you can send an algorithm over raw data and have insights pop up,” said Jeffrey Heer, a professor of computer science at the University of Washington and a co-founder of Trifacta, a start-up based in San Francisco.

Timothy Weaver, the chief information officer of Del Monte Foods, calls the predicament of data wrangling big data’s “iceberg” issue, meaning attention is focused on the result that is seen rather than all the unseen toil beneath. But it is a problem born of opportunity. Increasingly, there are many more sources of data to tap that can deliver clues about a company’s business, Mr. Weaver said.

In the food industry, he explained, the data available today could include production volumes, location data on shipments, weather reports, retailers’ daily sales and social network comments, parsed for signals of shifts in sentiment and demand.

The result, Mr. Weaver said, is being able to see each stage of a business in greater detail than in the past, to tailor product plans and trim inventory. “The more visibility you have, the more intelligent decisions you can make,” he said.

But if the value comes from combining different data sets, so does the headache. Data from sensors, documents, the web and conventional databases all come in different formats. Before a software algorithm can go looking for answers, the data must be cleaned up and converted into a unified form that the algorithm can understand.
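As a minimal sketch of the kind of unification step described here (the feeds, field names, and unit conversion below are invented for illustration; real pipelines face far messier inputs), a script might coerce records arriving as CSV and as JSON into one common schema before any analysis runs:

```python
import csv
import io
import json

# Hypothetical feeds: the same kind of measurement arrives as CSV from one
# source and as JSON from another, with different field names and units.
CSV_FEED = "sensor_id,temp_f,ts\nA1,68.0,2014-08-17T12:00:00\n"
JSON_FEED = '[{"sensor": "B2", "celsius": 21.5, "time": "2014-08-17T12:00:00"}]'

def from_csv(text):
    # This source reports Fahrenheit; convert to Celsius for the unified schema.
    for row in csv.DictReader(io.StringIO(text)):
        yield {
            "sensor_id": row["sensor_id"],
            "temp_c": round((float(row["temp_f"]) - 32) * 5 / 9, 2),
            "timestamp": row["ts"],
        }

def from_json(text):
    # This source already reports Celsius but uses different key names.
    for row in json.loads(text):
        yield {
            "sensor_id": row["sensor"],
            "temp_c": row["celsius"],
            "timestamp": row["time"],
        }

# Every record now shares one schema, so an algorithm can treat them uniformly.
unified = list(from_csv(CSV_FEED)) + list(from_json(JSON_FEED))
print(unified)
```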

Data formats are one challenge, but so is the ambiguity of human language. Iodine, a new health start-up, gives consumers information on drug side effects and interactions. Its lists, graphics and text descriptions are the result of combining the data from clinical research, government reports and online surveys of people’s experience with specific drugs.

But the Food and Drug Administration, National Institutes of Health and pharmaceutical companies often apply slightly different terms to describe the same side effect. For example, “drowsiness,” “somnolence” and “sleepiness” are all used. A human would know they mean the same thing, but a software algorithm has to be programmed to make that interpretation. That kind of painstaking work must be repeated, time and again, on data projects.
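In code, that interpretation step often comes down to a hand-maintained synonym table. A minimal sketch in Python, using the terms from the article (the mapping and fallback behavior are illustrative):

```python
# Map the differing agency and company terms to one canonical label.
# Real projects maintain far larger tables, often curated by hand.
CANONICAL_TERMS = {
    "drowsiness": "somnolence",
    "sleepiness": "somnolence",
    "somnolence": "somnolence",
}

def normalize_side_effect(term):
    # Fall back to the raw, lowercased term when no mapping exists,
    # so unmapped vocabulary stays visible for later review.
    key = term.strip().lower()
    return CANONICAL_TERMS.get(key, key)

reports = ["Drowsiness", "somnolence", "Sleepiness", "nausea"]
print([normalize_side_effect(r) for r in reports])
# -> ['somnolence', 'somnolence', 'somnolence', 'nausea']
```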

Data experts try to automate as many steps in the process as possible. “But practically, because of the diversity of data, you spend a lot of your time being a data janitor, before you can get to the cool, sexy things that got you into the field in the first place,” said Matt Mohebbi, a data scientist and co-founder of Iodine.

The big data challenge today fits a familiar pattern in computing. A new technology emerges and initially it is mastered by an elite few. But with time, ingenuity and investment, the tools get better, the economics improve, business practices adapt and the technology eventually gets diffused and democratized into the mainstream.

In software, for example, the early programmers were a priesthood who understood the inner workings of the machine. But the door to programming was steadily opened to more people over the years with higher-level languages from Fortran to Java, and even simpler tools like spreadsheets.

Spreadsheets made financial math and simple modeling accessible to millions of nonexperts in business. John Akred, chief technology officer at Silicon Valley Data Science, a consulting firm, sees something similar in the modern data world, as the software tools improve.

“We are witnessing the beginning of that revolution now, of making these data problems addressable by a far larger audience,” Mr. Akred said.

ClearStory Data, a start-up in Palo Alto, Calif., makes software that recognizes many data sources, pulls them together and presents the results visually as charts, graphics or data-filled maps. Its goal is to reach a wider market of business users beyond data masters.

Six to eight data sources typically go into each visual presentation. One for a retailer might include scanned point-of-sale data, weather reports, web traffic, competitors’ pricing data, the number of visits to the merchant’s smartphone app and video tracking of parking lot traffic, said Sharmila Shahani-Mulligan, chief executive of ClearStory.

“You can’t do this manually,” Ms. Shahani-Mulligan said. “You’re never going to find enough data scientists and analysts.”

Trifacta makes a tool for data professionals. Its software employs machine-learning technology to find, present and suggest types of data that might be useful for a data scientist to see and explore, depending on the task at hand.

“We want to lift the burden from the user, reduce the time spent on data preparation and learn from the user,” said Joseph M. Hellerstein, chief strategy officer of Trifacta, who is also a computer science professor at the University of California, Berkeley.

Paxata, a start-up in Redwood City, Calif., is focused squarely on automating data preparation — finding, cleaning and blending data so that it is ready to be analyzed. The data refined by Paxata can then be fed into a variety of analysis or visualization software tools, chosen by the data scientist or business analyst, said Prakash Nanduri, chief executive of Paxata.

“We’re trying to liberate people from data-wrangling,” Mr. Nanduri said. “We want to free up their time and save them from going blind.”

Data scientists emphasize that there will always be some hands-on work in data preparation, and there should be. Data science, they say, is a step-by-step process of experimentation.

“You prepared your data for a certain purpose, but then you learn something new, and the purpose changes,” said Cathy O’Neil, a data scientist at the Columbia University Graduate School of Journalism, and co-author, with Rachel Schutt, of “Doing Data Science” (O’Reilly Media, 2013).

Plenty of progress is still to be made in easing the analysis of data. “We really need better tools so we can spend less time on data wrangling and get to the sexy stuff,” said Michael Cavaretta, a data scientist at Ford Motor, which has used big data analysis to trim inventory levels and guide changes in car design.

Mr. Cavaretta is familiar with the work of ClearStory, Trifacta, Paxata and other start-ups in the field. “I’d encourage these start-ups to keep at it,” he said. “It’s a good problem, and a big one.”

Navy Makes History With Integrated Unmanned-Manned Carrier Ops

The US Navy just announced that it has successfully integrated unmanned and manned carrier operations for the first time. This is huge, as it’s pretty much the first step in how the Navy will work not for the next few years, but probably for the next few decades.

This news is the result of the second phase of X-47B shipboard testing.


http://foxtrotalpha.jalopnik.com/navy-makes-history-with-integrated-unmanned-manned-carr-1622988833/+ballaban

Online Marketing For Startups

The Internet is a marketing resource with unlimited potential, offering businesses the ability to create and maintain an online presence. But although startups are popping up more and more, sustaining a successful online business may require following specific guidelines.

There is a massive amount of information available on the Internet, and users control what they view and access. For that reason, putting a specific product or service in front of a potential customer can be a difficult task. However, many businesses combat this issue by designing and implementing an effective online marketing strategy. Although many startups have a limited budget, there are several useful tips for developing a successful online marketing strategy inexpensively.


http://level343.com/2014/08/18/online-marketing-startups/


Medicine’s Big Problem with Big Data: Information Hoarding

Researchers at IBM, Berg Pharma, Memorial Sloan Kettering, UC Berkeley and other institutions are exploring how artificial intelligence and big data can be used to develop better treatments for diseases.

But one of the biggest challenges for making full use of these computational tools in medicine is that vast amounts of data have been locked away — or never digitized in the first place.

The results of earlier research efforts or the experiences of individual patients are often trapped in the archives of pharmaceutical companies or the paper filing cabinets of doctors’ offices.

Patient privacy issues, competitive interests and the sheer lack of electronic records have prevented information sharing that could potentially reveal broader patterns in what appeared to any single doctor like an isolated incident.

When you can analyze clinical trials, genomic data and electronic medical records for 100,000 patients, “you see patterns that you don’t notice in a couple,” said Michael Keiser, an instructor at the UC San Francisco School of Medicine.

Given that promise, a number of organizations are beginning to pull together medical data sources.

Late last year, the American Society of Clinical Oncology announced the initial development of CancerLinQ, a “rapid learning system” that allows researchers to enter, access and analyze anonymized medical records of cancer patients.

Similarly, in April the CEO Roundtable on Cancer, a nonprofit representing major pharmaceutical companies, announced the launch of Project Data Sphere. It’s an open platform populated with clinical datasets from earlier Phase III studies conducted by AstraZeneca, Bayer, Celgene, Memorial Sloan Kettering, Pfizer, Sanofi and others.

The data has been harmonized and scrubbed of patient identifying details, enabling independent researchers or those working for life sciences companies to use it freely. They have access to built-in analytical tools, or can plug the data into their own software.

Quoted:

Patient privacy is important but so is making progress on cancer.

David Patterson, a professor of computer science at UC Berkeley developing machine learning tools for cancer research

It might uncover little-known drug candidates that showed some effectiveness against certain mutations, but were basically abandoned when they didn’t directly attack the principal target of a particular study, said Dr. Martin Murphy, chief executive of the CEO Roundtable on Cancer.

In some cases, it could also eliminate the need for control groups — those who receive the standard of care plus a placebo instead of the experimental treatment — since earlier studies have already indicated the outcomes for those patients. (That would be an important development because the fear of receiving a placebo is a major reason many patients decide against participating in clinical trials.)

The effort is happening now in part because of improving technology and in part because companies are coming around to the view that they’ll all be better off with the insights gleaned from this pooled data.

“It’s a recognition that it’s costing a lot more money to develop another drug,” Murphy said. “The low-hanging fruit was long ago harvested.”

Other information-sharing efforts include the Global Alliance for Genomics and Health, the molecular databases maintained by EMBL-EBI, and the National Institutes of Health’s Biomarker Consortium.


Study: How Corporations are Deploying in the Collaborative Economy

Can big brands learn from Uber, Kickstarter, Airbnb and the Maker Movement? Yes, they’re using the same strategies to connect to their market, at a rapid pace.

The Collaborative Economy is a movement. People are empowered to fund and build their own bespoke goods in the Maker Movement, and people are using new technologies to share what they already have in the Sharing Economy. In both cases, people are empowered to get what they need from each other, rather than buy from traditional companies.

Business models are changing, but corporations aren’t standing idly by; they’re quickly adapting and changing, as you may see in this detailed timeline. While we’ve yet to see return-on-investment numbers from any of these early deployments, our research on this same sample set indicates that the frequency of brands deploying is increasing, even before the launch of Crowd Companies six months ago.

A bit about the data and methods: An ongoing list of efforts has been collected from industry leaders, readers, and the brands themselves; we then tagged each deployment with specific categories, as sketched below. Most case examples are tagged in more than one instance, as they have overlapping deployments. Data was collected up until April 2014, and even more case examples are emerging. We identified a corporation as a company usually having over 1,000 employees or over a billion dollars in gross revenue.
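As a rough sketch of how such overlapping tags turn into frequency counts (the companies and tags below are invented, not the study’s actual dataset), each deployment contributes once to every category it is tagged with:

```python
from collections import Counter

# Hypothetical case examples; a single deployment can carry several tags,
# so it is counted once in each matching category.
deployments = [
    {"company": "ExampleAuto", "tags": ["auto", "sharing economy"]},
    {"company": "ExampleRetail", "tags": ["retail", "maker movement"]},
    {"company": "ExampleHotel", "tags": ["hospitality", "sharing economy"]},
]

tag_counts = Counter(tag for d in deployments for tag in d["tags"])
for tag, count in tag_counts.most_common():
    print(tag, count)  # e.g. "sharing economy 2"
```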

Three Graphs: Corporations in the Collaborative Economy

  1. Industry breakdown
  2. Major strategy
  3. Specific tactic(s)


Graph 1, above: Across the nearly 77 case examples collected up till April 2014, a majority of the deployments were from the retail industry, followed by the auto industry, then the technology space, then hospitality. In some cases, we counted consumer goods and durable goods that impact the retail space in the frequency count.

Key findings: Companies with consumer-facing business models are most impacted, and therefore have done the most deployments.



Graph 2, above: Across the broad spectrum of the collaborative economy (maker movement, crowdfunding, sharing economy, and more), corporations are mostly deploying tactics related to the sharing economy. The one isolated instance of a co-op, where the company is owned by its customers and employees, is REI. The second most common strategy was tapping the Maker Movement, often in the form of co-innovation, also known as outside-in innovation or other variations.

Key findings: Corporations gravitated toward sharing economy business models, often through sponsorships and partnerships with leading players like Uber, but this doesn’t guarantee business model resiliency beyond the media pickup.

 



Graph 3, above: This graph is a subset of the “Strategy” breakdown shown directly above. We found that the specific tactic most companies are deploying is “brand as a service,” meaning products are delivered on demand or offered through rental business models instead of ownership models. In particular, BMW, Peugeot, and Daimler rent cars directly to drivers, and hotels like the Westin and the Cosmopolitan rent out workout gear and dresses (the latter through Rent The Runway), respectively.

Key findings: Brands deployed “brand as a service,” which often equates to a rental or on-demand model, to meet the new market demand for “access over ownership.” Secondly, much of this was achieved through partnerships with players like Uber or other on-demand players. A few companies launched their own marketplaces, or partnered with other companies that offered them.


Conclusion: Brands must adopt Peer to Peer Commerce Models
Large corporations continue to adopt disruptive technologies. Twenty years ago, they adopted the internet; ten years ago, they adopted social media; and now, in 2014, they’re adopting the methods of the Collaborative Economy. The internet phase required an online B2C model, and social media shifted brands to peer-to-peer communication; in this next phase, brands must offer their own peer-to-peer commerce models. Each of these phases requires a mindset change, letting go of some control in order to gain more, and a shift in business model. To learn more, find my body of work on the collaborative economy, which includes research, frameworks, graphics, data, and case examples.

 

Read the complete article:

http://www.web-strategist.com/blog/2014/05/26/study-how-corporations-are-deploying-in-the-collaborative-economy/