Search all of New York City's affordable housing by name, owner, year built, location, financing, or physical characteristics (for example, by number of building violations in 2010). You can also research a wide range of neighborhood data, from crime, education, employment, and health to housing, property taxes, population, ethnic demographics, and transportation.
The Furman Center for Real Estate and Urban Policy collects a broad array of data on demographics, neighborhood conditions, transportation, housing stock and other aspects of the New York City real estate market. We make our data directly available to the public through our new Data Search Tool, and publish comprehensive analyses of these data in our periodic reports.
The Data Search Tool is a new online application that provides direct access to New York City data collected by the Furman Center. Users can select from a range of variables to create customized maps, download tables, and track trends over time. Users are able to overlay never-before-available information on privately owned, publicly subsidized housing programs collected through the Furman Center’s Subsidized Housing Information Project (SHIP). Information about how to use the Data Search Tool is available in our online guide.
From the Furman Center
Google is opening up a wider beta test of Dynamic Search Ads, an interesting new type of AdWords ad for larger advertisers that eliminates the need for keywords.
With this ad type, designed for retailers or other advertisers with large, often-changing inventory, Google automatically generates ad copy — based on the advertiser’s template — by looking at the content in the advertiser’s Web site. Google also automatically displays the ad in response to search terms it thinks are a good match, without the advertiser having to select keywords. Google has been using a similar no-keywords approach in its program for small local advertisers, AdWords Express.
For Dynamic Search Ads, advertisers input their Web site URL or the URL of a range of pages on their site — say, the women’s clothing section a retailer wants to promote — and select a bid price based on the value of that category to them. Google then continually crawls the Web site so it knows when inventory changes, and can theoretically respond with relevant ads more quickly than a marketing team manually creating keywords and ads. The system is also designed to keep on top of changes in the types of queries people are performing — Google says 16% of searches every day are new.
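To make the mechanics described above concrete, here is a deliberately simplified sketch of the idea: crawl the advertiser's pages, match an incoming query against their content, and fill an advertiser-supplied template from the best-matching page. All of the data, names, and scoring below are hypothetical illustrations of the concept, not Google's actual system or API.

```python
# Conceptual sketch only: a toy illustration of the idea behind Dynamic Search Ads.
# Match a search query against crawled pages and fill an advertiser-supplied ad
# template from the best-matching page. Everything here is hypothetical.

from dataclasses import dataclass

@dataclass
class CrawledPage:
    url: str
    title: str
    text: str

AD_TEMPLATE = "{headline} | Shop our latest styles"  # advertiser-supplied template

def score(query, page):
    """Count how many query terms appear on the page (very crude relevance)."""
    body = (page.title + " " + page.text).lower()
    return sum(term in body for term in query.lower().split())

def build_dynamic_ad(query, pages):
    """Pick the best-matching page and generate ad copy from the template."""
    best = max(pages, key=lambda p: score(query, p))
    if score(query, best) == 0:
        return None  # no good match, so no dynamic ad is served
    return {
        "headline": AD_TEMPLATE.format(headline=best.title),
        "destination_url": best.url,
    }

pages = [
    CrawledPage("https://example-shop.com/womens/dresses", "Women's Dresses",
                "summer dresses maxi dresses"),
    CrawledPage("https://example-shop.com/mens/shoes", "Men's Shoes",
                "running shoes boots sneakers"),
]
print(build_dynamic_ad("maxi dress sale", pages))
# -> headline built from the women's dresses page, pointing at that URL
```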
In an effort to keep this from impinging on advertisers’ existing campaigns, the system will hold back the dynamically generated ad in favor of advertiser-created copy, if the advertiser already has a campaign targeting the specific search term.
“We want to make sure it doesn’t affect keyword campaigns,” Baris Gultekin, director of AdWords product management, told me. “This is purely incremental.”
Gultekin says the company will provide advertisers with reporting on search terms that generated clicks, the matched destination pages and ad headlines generated, average CPC, clicks and conversions. Advertisers may optimize by adjusting a max CPC bid.
The new ad type has been in development for two and a half years, and “a couple hundred” advertisers across a variety of verticals have already been testing it. Gultekin says advertisers are seeing, on average, a 5-10% increase in conversions with a positive ROI.
One advertiser in particular — ApartmentHomeLiving.com, a real estate Web site with constantly changing inventory — says it saw a 50% increase in conversions at an average cost-per-conversion 73% lower than that of its normal search ads. The company is already a seasoned search marketer, with campaigns of up to 15 million keywords.
Dynamic Search Ads are available in all languages and all countries currently, but only to advertisers in the limited beta. The company is soliciting inquiries from customers that might be interested in participating in the beta in order to widen its reach.
I am not sure whether this is a site glitch or a major move to compete with Craigslist, but rental posting on Backpage appears to be free at the moment...
Rental posts typically cost a dollar on Backpage, either in apts by owner (no brokers), apts broker fee, or apts broker no fee. Yet today, everything in the rental section seems free.
This is a completely different model from Craigslist's, which allows owners to post for free and charges brokers $7-10 per post... (and enforces everything rather loosely and randomly).
Apartment Search from Walk Score (video on Vimeo).
Commuting is expensive and time spent sitting in traffic is lost forever. Here are our favorite commuting stats:
- Over three quarters of home shoppers rate being within a 30-minute commute to work as important. (Source: National Association of Realtors)
- Commuters waste 4.2 billion hours and 2.8 billion gallons of gas in traffic per year. (Source: Texas Transportation Institute)
- The average American spends over $9,000 per year on their car. That is roughly the payment on a $135,000 mortgage (a quick check of the arithmetic follows this list) and the second largest expense for most households, costing more than food, clothing, and health care. (Source: AAA)
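The mortgage comparison holds up as rough arithmetic: $9,000 a year is $750 a month, which is about the monthly payment on a $135,000, 30-year mortgage at an interest rate near 5.3% (an assumed, era-typical rate; AAA's exact loan terms aren't stated). A back-of-the-envelope check using the standard amortization formula:

```python
# Back-of-the-envelope check with assumed loan terms (30 years at 5.3%):
# is $9,000/year in car costs comparable to the payment on a $135,000 mortgage?

def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

car_cost_per_month = 9000 / 12                   # $750/month on the car
mortgage = monthly_payment(135_000, 0.053)       # roughly $750/month as well
print(f"car: ${car_cost_per_month:.0f}/mo, mortgage: ${mortgage:.0f}/mo")
```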
To get started, visit walkscore.com/apartments and enter your work (or school) address, select your preferred mode of transportation, and tell us how long you’re willing to commute.
Apartment listings from craigslist are automatically sorted by estimated commute time and can be further filtered by Walk Score, price and size.
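As an illustration of what that sorting and filtering amounts to, here is a small sketch: keep only the listings within a commute, price, and Walk Score threshold, then order them by estimated commute time. The listing data and field names are invented for the example; this is not Walk Score's actual code.

```python
# Illustrative sketch: sort apartment listings by estimated commute time and
# filter by Walk Score and price. All data and field names are made up.

from dataclasses import dataclass

@dataclass
class Listing:
    address: str
    price: int              # monthly rent in dollars
    walk_score: int         # 0-100
    commute_minutes: int    # estimated commute to the user's work address

def search(listings, max_commute=30, min_walk_score=70, max_price=2500):
    """Keep listings within the commute/price/Walk Score limits, nearest commute first."""
    matches = [
        l for l in listings
        if l.commute_minutes <= max_commute
        and l.walk_score >= min_walk_score
        and l.price <= max_price
    ]
    return sorted(matches, key=lambda l: l.commute_minutes)

listings = [
    Listing("123 Main St", 2200, 88, 18),
    Listing("9 Far Rd",    1800, 55, 45),
    Listing("42 Elm Ave",  2400, 92, 25),
]
for l in search(listings):
    print(f"{l.address}: {l.commute_minutes} min, Walk Score {l.walk_score}, ${l.price}")
```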
And if you don’t find what you’re looking for, we’ve integrated links to MyNewPlace and ForRent.com to search their national databases for nearby rental listings.
“Access to public transit and minimizing commute times are high-priority, quality of life issues for many renters. We’re very pleased to offer Walk Score users access to MyNewPlace’s extensive inventory of apartments and rental homes in virtually every neighborhood throughout the U.S.,” said Mark Moran, MyNewPlace SVP of Marketing.
From the Walk Score Blog
My post today on the OMG Blog.
In his 2003 novel Pattern Recognition, William Gibson created a character named Cayce Pollard with an unusual psychosomatic affliction: She was allergic to brands. Even the logos on clothing were enough to make her skin crawl, but her worst reactions were triggered by the Michelin Tire mascot, Bibendum.
Although it’s mildly satirical, I can relate to this condition, since I have a similar visceral reaction to word clouds, especially those produced as data visualization for stories.
If you are fortunate enough to have no idea what a word cloud is, here is some background. A word cloud represents word usage in a document by resizing individual words in said document proportionally to how frequently they are used, and then jumbling them into some vaguely artistic arrangement. The technique originated online in the 1990s as tag clouds (famously described as “the mullets of the Internet”), which were used to display the popularity of keywords in bookmarks.
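To show just how mechanically simple the technique is, here is a minimal sketch of the core computation: count word frequencies and give each word a display size that grows with its count (the "artistic" layout step is left out). This is only an illustration of the idea, not Wordle's actual implementation.

```python
# Minimal sketch of what a word cloud computes: count word frequencies and give
# each word a font size that grows linearly with its count. The layout/jumbling
# step is omitted; this illustrates the idea, not Wordle's implementation.

import re
from collections import Counter

def word_sizes(text, min_pt=10, max_pt=72):
    """Map each word to a font size that scales with how often it appears."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    top = counts.most_common(1)[0][1]   # frequency of the most common word
    return {
        word: min_pt + (max_pt - min_pt) * count / top
        for word, count in counts.items()
    }

sample = "blast car blast IED patrol car blast"
print(word_sizes(sample))
# 'blast' gets the largest size; the less frequent words get smaller sizes
```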
More recently, a site named Wordle has made it radically simpler to generate such word clouds, ensuring their accelerated use as filler visualization, much to my personal pain.
So what’s so wrong with word clouds, anyway? To understand that, it helps to understand the principles we strive for in data journalism. At The New York Times, we strongly believe that visualization is reporting, with many of the same elements that would make a traditional story effective: a narrative that pares away extraneous information to find a story in the data; context to help the reader understand the basics of the subject; interviewing the data to find its flaws and be sure of our conclusions. Prettiness is a bonus; if it obliterates the ability to read the story of the visualization, it’s not worth adding some wild new visualization style or strange interface.
Of course, word clouds throw all these principles out the window. Here’s an example to illustrate. About six months ago, I had the privilege of giving a talk about how we visualized civilian deaths in the WikiLeaks War Logs at a meeting of the New York City Hacks/Hackers. I wanted my talk to be more than “look what I did!” but also to touch on some key principles of good data journalism. What better way to illustrate these principles than with a foil, a Goofus to my Gallant?
And I found one: the word cloud. Please compare these two visualizations — derived from the same data set — and the differences should be apparent:
- Mapping a Deadly Day in Baghdad from The New York Times
- word cloud of titles in the Iraq war logs from Fast Company
I’m sorry to harp on Fast Company in particular here, since I’ve seen this pattern across many news organizations: reporters sidestepping their limited knowledge of the subject material by peering for patterns in a word cloud — like reading tea leaves at the bottom of a cup. What you’re left with is a shoddy visualization that fails all the principles I hold dear.
Every time I see a word cloud presented as insight, I die a little inside.
For starters, word clouds support only the crudest sorts of textual analysis, much like figuring out a protein by getting a count only of its amino acids. This can be wildly misleading; I created a word cloud of Tea Party feelings about Obama, and the two largest words were implausibly “like” and “policy,” mainly because the frequently used word “don’t” was automatically excluded. (Fair enough: Such stopwords would otherwise dominate the word clouds.) A phrase or thematic analysis would reach more accurate conclusions. When looking at the word cloud of the War Logs, does the equal sizing of the words “car” and “blast” indicate a large number of reports about car bombs or just many reports about cars or explosions? How do I compare the relative frequency of lesser-used words? Also, doesn’t focusing on the occurrence of specific words instead of concepts or themes miss the fact that different reports about truck bombs might use the words “truck,” “vehicle,” or even “bongo” (since the Kia Bongo is very popular in Iraq)?
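The stopword distortion Harris describes is easy to demonstrate with a small sketch on invented text: once "don't" is dropped by a stoplist, a negative phrase like "don't like the policy" counts toward "like" and "policy," flipping the apparent sentiment, while even a crude two-word phrase count keeps the negation attached. This is purely illustrative, not the analysis Harris ran.

```python
# Sketch of the stopword distortion, using invented text: dropping "don't" makes
# "like" and "policy" look dominant, while a simple two-word phrase (bigram)
# count keeps the negation intact. Illustrative only.

from collections import Counter

STOPWORDS = {"don't", "i", "the", "his", "we"}

text = "i don't like his policy i don't like the policy we don't like it"
words = text.split()

# Single-word counts with stopwords removed: what a word cloud typically sees.
cloud_counts = Counter(w for w in words if w not in STOPWORDS)
print(cloud_counts.most_common(3))   # "like" and "policy" dominate

# Two-word phrase counts keep the negation attached to its target.
bigrams = Counter(zip(words, words[1:]))
print(bigrams.most_common(2))        # ("don't", "like") is the real signal
```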
Of course, the biggest problem with word clouds is that they are often applied to situations where textual analysis is not appropriate. One could argue that word clouds make sense when the point is to specifically analyze word usage (though I’d still suggest alternatives), but it’s ludicrous to make sense of a complex topic like the Iraq War by looking only at the words used to describe the events. Don’t confuse signifiers with what they signify.
And what about the readers? Word clouds leave them to figure out the context of the data by themselves. How is the reader to know from this word cloud that LN is a “Local National” or COP is a “Combat Outpost” (and not a police officer)? Most interesting data requires some form of translation or explanation to bring the reader quickly up to speed, and word clouds provide nothing in that regard.
Visualization is reporting, with many of the same elements that would make a traditional story effective.
Furthermore, where is the narrative? For our visualization, we chose to focus on one narrative out of the many within the Iraq War Logs, and we displayed the data to make that clear. Word clouds, on the other hand, require the reader to squint at them like stereograms until a narrative pops into place. In this case, you can figure out that the Iraq occupation involved a lot of IEDs and explosions. Which is likely news to nobody.
As an example of how this might lead the reader astray, we initially thought we saw a surprising and dramatic rise in sectarian violence after the Surge, because the word “sect” was appearing in many more reports. We soon figured out that what we were seeing had less to do with violence levels and more to do with bureaucracy: the adoption of new Army rules requiring the reporting of the sect of detainees. Of course, the horrific violence we visualized in Baghdad was sectarian, but this was not something indicated in the text of the reports at the time. If we had visualized the violence in Baghdad as a series of word clouds for each year, we might have thought that the violence was not sectarian at all.
In conclusion: Every time I see a word cloud presented as insight, I die a little inside. Hopefully, by now, you can understand why. But if you are still sadistically inclined enough to make a word cloud of this piece, don’t worry. I’ve got you covered.
This is an insightful and rather shrewd criticism of word clouds, and I think it applies to much of the infographic- and data-visualization-obsessed tech culture we live in.
I find myself fascinated by many of the new and innovative ways to graphically represent data. Yet, as Jacob Harris points out, many of these sleek new techniques (if they don't miss the point entirely) strip supposedly core ideas from the very context that lends them meaning... and we are left with an aesthetically pleasing series of pretty graphs and pie charts that convey very little actual information (see my post on the Infographic Idiom).
And even though CNN, Fox, and other news networks are now embracing new visualization tools, tag clouds are ultimately useless measures of political sentiment, because concepts cannot be reduced to their most elemental articulation: a single word.