Sunday, August 04, 2013

SQL Server 2014 Standard Edition Sucks, and It’s All Your Fault [feedly]

I miss SQL Server for its tooling, and because I've been using it since SQL Server 7(?). It's just too expensive unless you are in the enterprise, and now there are many good open source and cloud options, not to mention NoSQL.
 
 
Shared via feedly // published on Brent Ozar Unlimited // visit site
SQL Server 2014 Standard Edition Sucks, and It's All Your Fault

Every release lately, Microsoft has been turning the screws on Standard Edition users. We get less CPU power, less memory, and few (if any) new features.

According to Microsoft, if you want to use more than $500 worth of memory in your server, you have to step up to Enterprise Edition. Seriously? Standard Edition licensing costs about $2,000 per CPU core, but it can only access 64GB of memory? That's ridiculous.

Take just a quick glance at the SQL Server 2014 edition feature grid and you might be shocked at what Standard Edition doesn't allow:

  • Database snapshots (a huge lifesaver when doing deployments)
  • Online reindexing, parallel index operations (wouldn't you like to use more than one core?)
  • Transparent database encryption (because only enterprises store personally identifiable data or sell stuff online, right?)
  • Auditing (guess only enterprises need compliance)
  • Tons of BI features (because hey, your small business doesn't have intelligence)
  • Any non-deprecated high availability feature (no AlwaysOn Availability Groups – you get database mirroring, but that's marked for death)

Every now and then, I hear managers and DBAs react with shock about how limited Standard is, and how much Enterprise Edition costs – $7,000 per CPU core.

Sometimes they even say, "That's ludicrous! If I was Microsoft, there's no way I would do it that way. And we've got really savvy developers – I bet we could even write a database engine that could do most of what we need."

Okay, big shot. Time to put your money where your mouth is.

The world is full of open source databases that are really good. You're not the only ones frustrated with what Microsoft's done to SQL Server licensing, and there are vibrant developer communities hard at work building and improving database servers.

What's that, you say? You're too busy? You'd rather keep paying support on your current SQL Server, and keep working on incremental performance improvements to your code and indexes?

Yep, that's what I thought.

Microsoft won't change its tack on SQL Server licensing until you start leaving. Therefore, I need you to stop using SQL Server so they'll start making it better. You know, for me.

Update July 30th: there's a good discussion of this post at Hacker News, and I've been participating. DBAs – if you want to stay current on what startup developers think about databases for their new projects, Hacker News is a good reality check. It's a completely different perspective than the typical enterprise developer echo chamber.

...
Thousands of people can't be wrong - subscribe to our YouTube channel.
(Well, except for those people who use Microsoft Access as a database. Those people are all entirely wrong.)


AWS Performance Tip 3: Use Amazon ElastiCache to improve application performance [feedly]

Amazon's ElastiCache is being used sparingly for now but will become a key component soon.
 
 
Shared via feedly // published on Cloud, Big Data and Mobile // visit site
AWS Performance Tip 3: Use Amazon ElastiCache to improve application performance


Web applications can often be made to perform better and run faster by caching critical pieces of data in memory. Frequently accessed data, layers of HTML fragments, results of time-consuming or expensive database queries, search results, sessions, and results of complex calculations and processes are usually very good candidates for cache storage. In general, read-intensive application architectures see the biggest gains from a cache tier. In web-scale applications, distributed caching has become a mandatory part of the architecture stack for improving performance.
AWS introduced ElastiCache for this purpose. ElastiCache is compatible with Memcached, so applications already using Memcached can easily migrate to Amazon ElastiCache. ElastiCache serves data from RAM and executes simple operations (such as SET and GET) with O(1) complexity. A large Amazon ElastiCache node can easily sustain 40k+ requests per second, an approach that performs far better than relying on the database for everything. Refer to the following benchmark from Garantia Data to understand how Memcached/ElastiCache can add performance to your architecture: http://garantiadata.com/blog/its-true-even-modest-datasets-can-enjoy-the-speediest-performance.
By adding an Amazon ElastiCache distributed caching tier to your architecture, you reduce the overall read load on your database and increase the overall performance of your application. Request/response latency can be reduced to a few milliseconds. Since Amazon ElastiCache is volatile in nature and items carry TTLs, it is recommended to fall back to the database or other source data stores in the event of a cache miss.
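
To make that fall-back pattern concrete, here is a minimal cache-aside sketch in Python using the pymemcache client. The node address and the db_query function are hypothetical placeholders, not anything ElastiCache itself provides:

import json
from pymemcache.client.base import Client

# Placeholder address: point this at your ElastiCache node's endpoint.
cache = Client(("my-cache-node.example.cache.amazonaws.com", 11211))

def get_product(product_id, db_query):
    """Cache-aside read: try memcached first, fall back to the database."""
    key = "product:%s" % product_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: a single O(1) GET from RAM
    row = db_query(product_id)     # cache miss: go to the source of truth
    cache.set(key, json.dumps(row), expire=300)  # TTL so stale entries age out
    return row

The 300-second TTL is what makes the volatility above safe: a miss simply falls through to the database and repopulates the cache.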

 



Weekly Product Update: New Events API available for private beta [feedly]

GitHub handle emailed, and the Events API beta awaited with bated breath!
 
 
Shared via feedly // published on The Mailgun Blog // visit site
Weekly Product Update: New Events API available for private beta

We believe in giving our customers full transparency into what is happening with their emails. That is why we show our customers what is happening with each and every email that gets sent or received through Mailgun with Logs.

Recently, we've been working on improving log data by giving customers even more data and enhancing the experience of retrieving that data. Like almost all features at Mailgun, the first interface we build is the API, with the GUI to follow. We are pleased to announce the private beta release of our Events API, which will be the foundation for Logs and other reporting in the future.

So how did we improve on Logs?

Better pagination

Previously, we just had simple skip and limit parameters to specify which portions of the logs you wanted to fetch. This was cumbersome and resulted in poor performance characteristics. With the new Events API, we've provided parameters that allow you to specify a time period, as well as ascending or descending order. We also provide URLs in the response for retrieving the next and previous pages, giving you the ability to easily traverse log data.
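
A rough sketch of traversing pages this way in Python with the requests library; the endpoint path, parameter names and response fields below are assumptions based on the description above, not the private-beta docs:

import requests

API_KEY = "key-xxxxxxxx"  # your Mailgun API key
url = "https://api.mailgun.net/v2/example.com/events"  # assumed path
params = {"begin": "Thu, 01 Aug 2013 00:00:00 GMT", "ascending": "yes", "limit": 100}

while url:
    resp = requests.get(url, auth=("api", API_KEY), params=params)
    resp.raise_for_status()
    page = resp.json()
    items = page.get("items", [])
    if not items:
        break  # an empty page means we've read everything
    for event in items:
        print(event.get("event"), event.get("recipient"))
    # Follow the server-supplied next-page URL rather than computing skip/limit.
    url = page.get("paging", {}).get("next")
    params = None  # the next URL already encodes the query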

Allow for detailed querying with filter expressions

Previously, you could only fetch all of the log data within the specified skip and limit parameters, and then you'd have to filter the data on your side. The Events API allows you to filter the data by a number of additional parameters, and to combine expressions in order to create complex queries.
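
Continuing the sketch above, a filtered query might look like this; the field names and the OR syntax are again my assumptions:

# Fetch only opens and clicks for a single recipient (hypothetical filters).
resp = requests.get(
    "https://api.mailgun.net/v2/example.com/events",
    auth=("api", API_KEY),
    params={
        "recipient": "alice@example.com",
        "event": "opened OR clicked",
    },
)
resp.raise_for_status()
for event in resp.json().get("items", []):
    print(event.get("timestamp"), event.get("event"))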

Include more events

Previously, with the Logs API, we only included information about what happens to your email up until it is accepted by the recipient email server or dropped. You could get the other events through the Campaigns API, but we thought it would be better to include all events in one easy-to-use API. The Events API allows you to query for all events, including those that occur after delivery, like opens, clicks, unsubscribes and spam complaints.

How much does this awesome API cost?

Free for you, my friend.
For free accounts, we retain events for 2 days. For paid accounts, we retain events for 30 days. If you need retention for longer than that, contact us to discuss your needs.

How can I try out the new API?

If you are interested in playing with the Events API, feel free to send us an email at events-api@mailgun.net with your GitHub handle, and we'll give you access to the private docs.

We hope this release dramatically improves the experience of tracking your emails. Please provide any feedback!

Happy emailing!
Mailgunners


Dealing with your 6-months old backlog [feedly]

Nothing to do with a backlog, just a funny cat gif for me!
 
 
Shared via feedly // published on DevOps Reactions // visit site
Dealing with your 6-months old backlog

image by Amanda


I, Revolution: OAuth is awesome, OAuth is horrible. [feedly]

OAuth is horrible in terms of being a consumer and a provider.
 
 
Shared via feedly // published on John Sheehan // visit site
I, Revolution: OAuth is awesome, OAuth is horrible.

From my co-founder Frank Stratton:

Photo: Micro Ecosystem by Pierre Pocs

Introduction

After writing the Runscope API, and several OAuth API clients for other services, I've finally had some time to figure out how I feel about OAuth 2.0… OAuth is awesome, OAuth is horrible.

My co-founder John has already written a lot…


Tuesday, July 30, 2013

The Most Popular Pub Names [feedly]

This is what MongoDB and big data are perfect for, especially if you are a Brit. Genius.

Cheers!
 
 
Shared via feedly // published on The MongoDB NoSQL Database Blog // visit site
The Most Popular Pub Names

By Ross Lawley, MongoEngine maintainer and Scala Engineer at 10gen

Earlier in the year I gave a talk at MongoDB London about the different aggregation options with MongoDB. The topic recently came up again in conversation at a user group, so I thought it deserved a blog post.

Gathering ideas for the talk

I wanted to give a more interesting aggregation talk than the standard "counting words in text", and as the aggregation framework gained shiny 2dsphere geo support in 2.4, I figured I'd use that. I just needed a topic…

What is top of mind for us Brits?

Two things immediately sprang to mind: weather and beer.

I opted to focus on something close to my heart: beer :) But what to aggregate about beer? Then I remembered an old pub quiz favourite…

What is the most popular pub name in the UK?

I know there is some great open data out there, including a wealth of information on pubs, available from the awesome OpenStreetMap project. I just needed to get at it, and happily the Overpass API provides a simple "xapi" interface to OSM data. All I needed was anything tagged with amenity=pub within the bounds of the UK, and with the xapi interface this is as simple as a wget:

http://www.overpass-api.de/api/xapi?*[amenity=pub][bbox=-10.5,49.78,1.78,59]

Once I had an osm file, I used the imposm python library to parse the XML and then convert it to the following GeoJSON format:

{    "_id" : 451152,    "amenity" : "pub",    "name" : "The Dignity",    "addr:housenumber" : "363",    "addr:street" : "Regents Park Road",    "addr:city" : "London",    "addr:postcode" : "N3 1DH",    "toilets" : "yes",    "toilets:access" : "customers",    "location" : {        "type" : "Point",        "coordinates" : [-0.1945732, 51.6008172]    }  }

Then it was simply a case of inserting each one as a document into MongoDB. I quickly noticed that the data needed a little cleaning, as I was seeing duplicate pub names, for example: "The Red Lion" and "Red Lion". Because I wanted to make a wordle, I normalised all the pub names, along the lines of the sketch below.
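
A rough Python sketch of that kind of normalisation; the exact rules here are my guess, and the real cleaning code lives in the repo linked below:

import re

def normalise(name):
    # Strip whitespace and a leading "The" so that "Red Lion" and
    # "The Red Lion" collapse to one canonical form (assumed rules).
    name = re.sub(r"^the\s+", "", name.strip(), flags=re.IGNORECASE)
    return "The " + name.title()

# normalise("red lion") == normalise("The Red Lion") == "The Red Lion"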

If you want to know more about the importing process, the full loading code is available on github: osm2mongo.py

Top pub names

It turns out finding the most popular pub names is very simple with the aggregation framework. Just group by the name and then sum up all the occurrences. To get the top five most popular pub names we sort by the summed value and then limit to 5:

db.pubs.aggregate([
    {"$group":
        {"_id": "$name",
         "value": {"$sum": 1}
        }
    },
    {"$sort": {"value": -1}},
    {"$limit": 5}
]);
For the whole of the UK this returns:
  1. The Red Lion
  2. The Royal Oak
  3. The Crown
  4. The White Hart
  5. The White Horse

(image: wordle of the most popular UK pub names)

Top pub names near you

At MongoDB London I thought that was too easy, so I filtered to find the top pub names near the conference, showing off some of the geo functionality that became available in MongoDB 2.4. To limit the result set, add a $match stage that ensures the location is within a 2 mile radius using $centerSphere. Just provide the centre coordinates as [longitude, latitude] and a radius of roughly 2 miles converted to radians (the earth's radius is approximately 3959 miles, so divide 2 by it):

db.pubs.aggregate([
    { "$match" : { "location" :
        { "$within" :
            { "$centerSphere" : [[-0.12, 51.516], 2 / 3959] }}}
    },
    { "$group" :
        { "_id" : "$name",
          "value" : { "$sum" : 1 } }
    },
    { "$sort" : { "value" : -1 } },
    { "$limit" : 5 }
]);
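
If you're following along in Python (like the import script), the same query via pymongo looks like this. It's a sketch assuming a local mongod and an invented database name; $geoWithin is simply the modern spelling of $within:

from pymongo import MongoClient

db = MongoClient().pubnames  # hypothetical database holding the pubs collection
pipeline = [
    {"$match": {"location":
        {"$geoWithin": {"$centerSphere": [[-0.12, 51.516], 2.0 / 3959]}}}},
    {"$group": {"_id": "$name", "value": {"$sum": 1}}},
    {"$sort": {"value": -1}},
    {"$limit": 5},
]
for doc in db.pubs.aggregate(pipeline):
    print(doc["_id"], doc["value"])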

What about where I live?

At the conference I looked up the most popular pub names near the venue. That's great if you happen to live in the centre of London, but what about everyone else in the UK? So for this blog post I decided to update the demo code and make it dynamic, based on where you live.

See: pubnames.rosslawley.co.uk

Apologies to those outside the UK - the demo app doesn't have data for the whole world, though it's surely possible to do.

Cheers

All the code is available in my repo on GitHub, including the bson file of the pubs and the wordle code - so fork it and start playing with MongoDB's great geo features!


How to Extend Your Wi-Fi Network With an Old Router [feedly]

Aha. Now I know what to do with that old Belkin that's been kicking around.
 
 
Shared via feedly // published on Lifehacker // visit site
How to Extend Your Wi-Fi Network With an Old Router


When you upgrade to a faster, better router, don't toss your old one. Whether through stock or custom firmware, you can likely turn it into a repeater that can carry your Wi-Fi's signal to the dark corners of your home.

Read more...

    



Make the Most Bacon-y Burger with the Bacon Weave and Other Tricks [feedly]

I'm starving.
 
 
Shared via feedly // published on Lifehacker // visit site
Make the Most Bacon-y Burger with the Bacon Weave and Other Tricks


If you're going to indulge in a bacon cheeseburger, you might as well go whole hog (sorry for the pun) and get the most bacon flavor out of it. Serious Eats shows us how to do that.

Read more...

    



Lazy loading social buttons and other performance tweaks at work today.

The end result is that things are a little faster, although I still have lots of optimizations to do. Getting there, though!

This is what sharing it out on Twitter looks like with Open Graph tags etc.:


Monday, July 29, 2013

Only way to leave LinkedIn is to destroy LinkedIn HQ [feedly]

They are keen on sending emails....
 
 
Shared via feedly // published on The Daily Mash // visit site
Only way to leave LinkedIn is to destroy LinkedIn HQ
MEMBERSHIP of networking website LinkedIn can only be terminated by destroying its corporate headquarters, it has emerged.

A LinkedIn spokesman said: "People have been emailing us, saying they lost interest in the site immediately after joining and have since attempted to delete their profile several dozen times.

"Actually leaving the site is quite simple – [...]