Scott Hanselman

Review: Logitech ConferenceCam CC3000e - A fantastic pan tilt zoom camera and speaker for remote workers

July 07, 2014 - Posted in Remote Work | Reviews

I'm forever looking for tools that can make me a more effective remote worker. I'm still working remotely from Portland, Oregon for folks in Redmond, Washington.

You might think that a nice standard HD webcam is enough to talk to your remote office, but I still maintain that a truly great webcam for remote work is one that:

  • Has a wide field of view > 100 degrees
  • Has a 5x-10x optical zoom to look at whiteboards
  • Has motorized pan-tilt-zoom

Two years later I'm still using (and happy with) the Logitech BCC950. I'm so happy with it that I wrote and maintain a cloud server to remotely control the PTZ (pan tilt zoom function) of the camera. I wrote all that up earlier on this blog in Cloud-Controlled Remote Pan Tilt Zoom Camera API for a Logitech BCC950 Camera with Azure and SignalR.

Fast-forward to June of 2014 and Logitech offered to loan me (I'm sending it back this week) one of their new Logitech ConferenceCam CC3000e conferencing systems. Yes, that's a mouthful.

To be clear, the BCC950 is a fantastic value. It's usually <$200, has motorized PTZ and a remote control (it also works with my software, natch), doesn't require drivers with Windows (a great plus), is a REALLY REALLY good speakerphone for Skype or Lync calls, its camera is 1080p, the speakerphone shows up as a standard audio device, and it has a removable "stalk" so you can control how tall the camera is.

BUT. The BCC950's zoom function is digital, which sucks for trying to see remote whiteboards, and its field of view is just OK.

Now, enter the CC3000e, a top-of-the-line system for conference rooms. What do I get for $1000? Is it worth 4x the BCC950? Yes, if you have the grand and you're on video calls all day. It's an AMAZING camera and it's worth it. I don't want to send it back.

Logitech ConferenceCam CC3000e - What do you get?

The unboxing is epic, not unlike an iPhone, except with more cardboard. It's a little overwhelming as there are a lot of parts, but it's all numbered and very easy to set up. My first impression was "why do I need all these pieces" as I'm used to the all-in-one-piece BCC950, but then I remembered that the CC3000e is actually meant for business conference rooms, not random remote workers in a home office like me. Still, later I appreciated the modularity as I ended up mounting the camera on top of an extra TV I had, while moving the speaker module under my monitor nearer my desk.

You get the camera, the speaker/audio base, a 'hockey puck' that routes all the cables, and a remote control.

The Good

You've seen what a regular webcam looks like. Two heads and some shoulders.

Skyping with a regular camera

Believe it or not, in my experience it's hard to get a sense of a person from just their disembodied head. Who knew?

I'm regularly Skyping/Lyncing into an open space in Redmond where my co-workers move around freely, use the whiteboard, stand around, and generally enjoy their freedom of motion. If I've got a narrow 70-degree-or-less field of view from a fixed location, I can't get a feel for what's going on. From their perspective, none of them really know what my space looks like. I can't pace around, use a whiteboard, or interact with them in any "more than just a head" way.

Enter a real PTZ camera with real optics and a wide field of view. You really get a sense of where I am in my office, and that I need to suck it in before taking screenshots.

The CC3000e has an amazing wide field of view

Now, move the camera around.

The CC3000e has a remote control to turn it

Here's me trying to collaborate with my remote partners over some projects. See how painful that is? EVERY DAY I'm talking to half-heads with tiny cameras.

My co-worker's chin

Part of my co-workers' faces

Half my boss's head

These calls weren't staged for this blog post, people. FML. These are real meetings, and a real one-on-one with the half a forehead that is my boss.

Now, yes, I admit that you'll not ALWAYS want to see my torso when talking. Easy: I turn and face the camera, zoom in a smidge, and we've got a great 1:1 normal disembodied-head conversation happening.

A bright HD Skype

But when you really want to connect with someone, back up a bit. Get a sense of their space.

A wide field of view shows you more context

And if you're in a conference room, darn it, mount that sucker on the far wall.

A wide field of view shows you the whole room

While only the me-side of these calls used the CC3000e (as I'm the dude with the camera), I've used the other screenshots of actual calls I've had to show you the difference between clear optics and a wide field of view vs. a laptop's sad little $4 webcam. You can tell who has a nice camera. Let me tell you, this camera is tight.

The CC3000e has a lot of great mounting options that come included with the kit. I was able to get it mounted on top of my TV like a Kinect, using the included brackets, in about 5 minutes. You can also mount it flat against the wall, which could be great for tight conference room situations.


The camera is impressive, and politely looks away when it's not in use. A nice privacy touch, I thought.


The optical zoom is fantastic. You'll have no trouble zooming in on people or whiteboards.

Here's zoomed out.

Zoomed out

Here's zoomed in. No joke, I just zoomed in with the remote and made a face. It's crazy and it's clear.

Zoomed in

The speakerphone base is impressively sturdy with an awesome Tron light-ring that is blue when you're on a call, and red when you're either on hold (or you're the MCP.)

The screen will also show you the name/number of the current caller.


A nice bonus, you can pair the base with your cell phone using Bluetooth and now you've got a great speaker and speakerphone. This meant I could take all calls (mobile, Lync, Skype) using one speakerphone.

The Weird

There have been a few weird quirks with the CC3000e. For example - right this moment, in fact - the camera's "on" indicator light is flashing blue, but no app is using the camera. It's as if it got stuck after a call. Another is that the microphone quality (this is subjective, of course) for people who hear me on the remote side doesn't seem as deep and resonant as with the BCC950. Now, no conference phone will ever sound as nice as a headset, but the audio to my ear and my co-workers' ears is just off compared to what we're used to. Also, a few times the remote control just stopped working for a while.

On the software side, I've personally found the Logitech Lync "Far End Control" PTZ software to be unreliable. Sometimes it works great all day; other days it won't run. I suspect it's having an issue communicating with the hardware. It's possible, given the weird light thing combined with this PTZ issue, that I have a bad/sick review model. Now, here's the Far End Control application's PDF guide. It's supposed to "just work," of course. You and the person you're calling each run a piece of software that creates a tunnel over Lync and allows each of you to control the other's PTZ motor. This is a different solution than my PTZ system, as theirs uses Lync itself to transmit PTZ instructions while mine requires a cloud service.

Fortunately, my PTZ system *also* works with the ConferenceCam CC3000e. I just tested it; you simply have to change the name of the device in your *.config file.

<appSettings>
  <!-- <add key="DeviceName" value="BCC950 ConferenceCam"/> -->
  <add key="DeviceName" value="ConferenceCam CC3000e Camera"/>
</appSettings>
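
If you're wiring that up in your own code, reading the setting back out is standard System.Configuration fare. Here's a minimal sketch (the key name matches the config above; the rest is illustrative):

using System;
using System.Configuration;   // reference System.Configuration.dll

class CameraConfig
{
    static void Main()
    {
        // Pull the camera's device name from app.config
        string deviceName = ConfigurationManager.AppSettings["DeviceName"];
        Console.WriteLine("Looking for camera: " + deviceName);
    }
}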

To be clear, the folks at Logitech have told me that they can update the firmware and adjust and improve all aspects of the system. In fact, I flashed it with firmware from May 12th before I started using it. So, it's very possible that these are just first-version quirks that will get worked out with a software update. None of these issues have prevented my use of the system. I've also spoken with the developer on the Far End Control system and they are actively improving it, so I've got high hopes.

This is a truly killer system for a conference room or remote worker (or both, if I'm lucky and have budget.)

  • Absolutely amazing optical zoom
  • Top of the line optics
  • Excellent wide field of view
  • The PTZ camera turns a full 180 degrees
  • Programmable "home" location
  • Can act as a Bluetooth speaker/speakerphone for your cell phone
  • The camera turns away from you when it's off. Nice reminder of your privacy.

The optics alone would make my experience as a remote worker much better. I am boxing it up and I am going to miss this camera. Aside from a few software quirks, the hardware is top notch and I'm going to be saving up for this camera.

You can buy the Logitech CC3000e from Logitech directly or from some shady folks at Amazon.

Sponsor: Thanks to friends at RayGun.io. I use their product and LOVE IT. Get notified of your software’s bugs as they happen! Raygun.io has error tracking solutions for every major programming language and platform - Start a free trial in under a minute!


Catch up on all the videos from DotNetConf Spring 2014

July 04, 2014 - Posted in Learning .NET

Did you miss out on DotNetConf when it streamed LIVE just a few weeks ago? Don't you worry, it's all recorded and online for you to stream or download!

We are happy to announce that we're planning another .NET Conf in a few months, so stay tuned through the .NET Conf Twitter account (@dnetconf) or check the .NET Conf site in the future: http://www.dotnetconf.net. Big thanks to Javier Lozano for all his work with the site and conference coordination.

Everything was recorded and is up here: http://channel9.msdn.com/Events/dotnetConf/2014

.NET Conf summary and recorded content

The .NET Conf 2014 was a two-day virtual event (June 25th-26th) focused on .NET technologies, covering application development for the desktop, mobile, and cloud/server. It was hosted by the MVP community and Microsoft, bringing top speakers and great topics straight to your PC.

Below you can review all the delivered sessions and find their recorded content.

Day 1 – .NET core and .NET in client/devices

State of .NET (Keynote)

Jay Schmelzer

Opening and overview of the current state of .NET and .NET on the client side.


New Innovations in .NET Runtime

Andrew Pardoe

We're changing the way we execute code in the .NET Runtime. Hear about .NET Native, RyuJIT, and modern server strategy.


The Future of C#

Kevin Pilch-Bisson, Mads Torgersen

The Microsoft Managed Languages team has been focused on rebuilding the VB and C# compilers and editing experiences as part of Project "Roslyn". This effort has paved the way for these languages to continue evolving for many years to come. However, what does that future actually look like? We explore the editing experience, how public APIs may be used to write language-level extensions, as well as new language features.


Building Universal Windows Apps with XAML and C# in Visual Studio

Larry Lieberman

In April at Build 2014, Microsoft unveiled universal Windows apps, a new approach that enables developers to maximize their ability to deliver outstanding application experiences across Windows PCs, laptops, tablets, and Windows Phones. This means it's now easier than ever to create apps that share most of their code. Code can be shared using the new shared app templates, as well as by creating Portable class libraries. This session will walk through the development of a shared app and will discuss where it still makes sense to implement platform specific features.


.NET Native Deep Dive

Andrew Pardoe

Look inside the .NET Native compiler toolchain to understand how we enable .NET Windows Store apps to compile to self-contained native apps.


Fun with .NET - Windows Phone, LEGO Mindstorms, and Azure

Dan Fernandez

In this demo-packed session, we'll walk through building your first .NET controlled LEGO Mindstorm using Windows Phone. You'll learn about the LEGO EV3 API, how to control motors and read sensor data, and how to batch commands to the robot. Once we have a working, drivable robot, we'll switch to cloud-enabling the robot so that you can drive the robot remotely via a Web site hosted in Microsoft Azure.


Kinect for Windows

Ben Lower

We will take a look at what's new in Kinect for Windows v2 including the improvements in core sources like Infrared and Depth data.  We will also show how the new Kinect Studio enables Kinect development even while travelling via plane, train, or automobile (note: you should not dev and drive) and how Kinect Interactions can be used to add a new input modality to Windows Store applications.


What's New in XAML Platform & Tooling

Tim Heuer

Tim will do a lap around what is new to the Windows Phone 8.1 platform as well as a tour of the new XAML tooling in Visual Studio Update 2 for developers and designers.


Developing Native iOS, Android, and Windows Apps with Xamarin

James Montemagno (Xamarin)

Mobile continues to expand and evolve at a rapid pace. Users expect great native experiences in the palm of their hands on each and every platform. A major hurdle for developers today is that each platform has its own programming language and tools to learn and maintain. Even if you tackle the burden of learning Objective-C and Java you will still have to manage multiple code bases, which can be a nightmare for any development team large or small. It doesn't have to be this way as you can create Android, iOS, Windows Phone, and Windows Store apps leveraging the .NET framework and everything you love about C#.


What's new for WPF Developers

Dmitry Lyalin

Windows Presentation Foundation (WPF) enables .NET developers to build rich and powerful Windows desktop applications using managed languages and XAML. In this session we'll cover all the latest innovations available to WPF developers such as improvements coming from .NET, integration points with the latest cloud technologies and enhanced tooling & profiling capabilities in Visual Studio.

Day 2 – .NET in server and cloud


ASP.NET Today and Tomorrow (Keynote)

Scott Hunter

It's been an amazing decade for ASP.NET. Today in 2014, almost all of ASP.NET is open source, developed in the open, and accepting community contributions. One ASP.NET and VS 2013 added some amazing new tooling enhancements for HTML5, CSS3, and JavaScript. VS 2013.3 is coming soon with even more innovations as we march towards ASP.NET vNext. Join Scott Hunter as he shows you how it all works together: what's available in ASP.NET today, where ASP.NET is headed tomorrow, and what you need to know to best support the code you've written and the code you will write. We'll also talk about the rise of the cloud and how it changes the way we write large systems. All this, plus a lot of open source, and deploying to Azure.


ASP.NET Web Forms

Scott Hunter, Pranav Rastogi

Do you want to learn techniques to enhance your Web Forms development experience? See how you can improve your code's maintainability and testability and your site's performance. Leverage new features in ASP.NET Web Forms 4.5 to reduce the amount of UI "yuck" code and focus on your application's logic. We will look at some of the improvements to Web Forms such as support for EF 6, new scaffolders, and more features which you might not have heard of. We will see how to leverage all of the latest tools in Visual Studio, like Browser Link and Web Essentials, to make your coding experience simpler, shorter, and more enjoyable.


ASP.NET MVC 6 (now with integrated Web API!)

Daniel Roth

ASP.NET MVC and ASP.NET Web API in ASP.NET vNext are becoming one singular framework: ASP.NET MVC 6. Join Daniel Roth as he shows how to create great ASP.NET web apps that serve both pages and services. First we'll see how to build OData v4 compliant services using ASP.NET Web API 2.2 and the new attribute routing features available in ASP.NET MVC 5.2. Then we'll take a look at how ASP.NET MVC and Web API are being combined into a single framework, ASP.NET MVC 6, for handling all of your Web UI and services. We'll learn how to use ASP.NET MVC and Web APIs in ASP.NET vNext to support connected applications for browsers, Windows Phone, Windows Store and more!
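
If you haven't tried attribute routing yet, it puts the route template right on the action instead of in a central route table. Here's a minimal Web API 2.x sketch (the controller and route template are my own illustrative names, and you'd need config.MapHttpAttributeRoutes() at startup):

using System.Web.Http;

public class ProductsController : ApiController
{
    // The route template lives on the action itself
    [Route("api/products/{id:int}")]
    public IHttpActionResult GetProduct(int id)
    {
        return Ok(new { Id = id, Name = "Sample" });
    }
}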


Entity Framework (v6 and v7 preview)

Rowan Miller

Entity Framework is Microsoft's recommended data access technology for new applications in .NET. We'll explore how the current release of Entity Framework can be used to build applications. We'll also look at an early preview of EF7, a modern, lighter weight, and composable version of Entity Framework (EF) that can be used on a variety of platforms, including ASP.NET vNext, Windows Phone and Windows Store. This new version will also support targeting non-relational data stores.


Taking Your ASP.NET Apps to the Cloud with Microsoft Azure Web Sites

Brady Gaster

Web developers are seeing huge boosts in their productivity building Web Applications with ASP.NET, with so many huge improvements to Visual Studio focused on the problems web developers solve each day. We've also made some significant improvements in Microsoft Azure for web developers by concentrating on providing the community the best cloud in which to host ASP.NET web apps. Features like Auto-scaling and Traffic Management provide high-performance, internationally-distributed web hosting scenarios. We've made it easier than ever to add background processing by adding Azure WebJobs as an option for web developers who need to add a middle tier. Along with staging and production deployment slots, and a rich SDK to enable service automation - a feature many software-as-a-service apps can use to automate their provisioning and deployment experiences - there's no better place than Microsoft Azure Web Sites to host your ASP.NET apps.


ASP.NET Publishing Explained

Sayed Hashimi

The Visual Studio publishing experience for ASP.NET projects has been refined over the past few years. In this talk we will go into detail covering all the different techniques to publish your ASP.NET apps. We will start in Visual Studio and quickly move to the command line and continuous integration servers. Sayed will show you how you can improve your publish process to target multiple environments and how to automate publishing from a CI server. We will also take a look at some of the unique publish workflows that Azure Web Sites supports.


ASP.NET Identity

Pranav Rastogi

ASP.NET Identity is a totally rewritten framework that brings the ASP.NET membership system into the modern era. ASP.NET Identity makes it easier to integrate different authentication systems, such as local username and password, as well as social logins such as Facebook, Twitter, etc. It also gives you greater control over persisting data to your backend technology of choice. ASP.NET Identity is a game changer, bringing in more modern authentication systems such as two-factor authentication. You can use ASP.NET Identity to secure Web Apps as well as Web APIs.


Dependency Injection and Testability in .NET

Mani Subramanian, Francis Cheung

Testability is more important than ever. With short ship cycles and the desire for continuous delivery, it is critical to quickly know if a modification has destabilized your code base. This session will enable you to use a dependency injection container of your choice to create testable code. We will examine tightly coupled code and what problems it causes and how DI can be used to avoid these problems. The Unity DI container will be used as the medium to understand the concepts.


SignalR

Damian Edwards

SignalR is one of the latest additions to the ASP.NET web stack. It provides real-time HTTP support for your web applications, but the good news is that SignalR is useful outside of a web browser, too. With a client API that's virtually identical in both the JavaScript and native .NET client implementations, developers only need to learn the SignalR abstraction itself to be able to write cross-platform real-time applications. This session will walk through the process of adding real-time functionality to your Windows 8 and Windows Phone 8 apps. We'll also take a look at the scale-out providers and OWIN hosting capabilities available in the latest release of SignalR.
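
To give a sense of that client-API symmetry, here's a minimal .NET client sketch using the Microsoft.AspNet.SignalR.Client NuGet package (the URL, hub name, and method names are illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

class Program
{
    static async Task Main()
    {
        var connection = new HubConnection("http://localhost:8080/");
        IHubProxy chat = connection.CreateHubProxy("ChatHub");

        // Subscribe to a server-to-client call, just like the JavaScript client
        chat.On<string>("addMessage", message => Console.WriteLine(message));

        await connection.Start();
        await chat.Invoke("Send", "Hello from the .NET client");
    }
}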


ASP.NET vNext 101

Damian Edwards, David Fowler

ASP.NET vNext is a lean and composable framework for building web and cloud applications. ASP.NET vNext is fully open source and available on GitHub. ASP.NET vNext is currently in preview, and in this talk Fowler and Edwards will put it all into context. vNext apps can use a cloud-optimized subset of the .NET Framework. This subset of the framework is about 11 megabytes in size compared to 200 megabytes for the full framework, and is composed of a collection of NuGet packages. What does that mean for compatibility? When would you choose vNext and when would you not? You don't have to use Visual Studio to develop ASP.NET vNext applications. You can develop and run vNext on platforms that Visual Studio doesn't run on. But Visual Studio provides the best development experience, and we'll cover ASP.NET vNext both inside and outside the IDE.

We encourage you to share this content with your colleagues and friends, and remember that .NET Conf and all its content is free!



Diabetics: It's fun to say Bionic Pancreas but how about a reality check

June 30, 2014 - Posted in Diabetes

A diagram outlining the complete bionic pancreas system

The state of healthcare reporting is just abysmal. It's all link-bait. It's fun to write things like "Random Joe invents cure for diabetes in his garage, saves dying 5 year old." It's surely less fun to read them when you're the one with the disease.

IMPORTANT UPDATE: Scott (me) has now interviewed Dr. Steven Jon Russell, MD, PhD, a member of the Bionic Pancreas Team! Check out their interview at http://hanselminutes.com/431.

It's time for medical journalists to try a little harder and push back against editors that write headlines optimized for pageviews. The thing is, I've met a dozen General Practitioners who are themselves confused about how diabetes works, and link-bait journalism just ruins it for the public, too. I've received no fewer than 50 personal emails or FB posts from well-meaning friends this last week. "Have you heard? They've cured your diabetes with a bionic pancreas!"

I have been a Type 1 Diabetic for 20 years, I've worn an insulin pump 24 hours a day for the last 15 years (that's over 130,000 hours, in case you're counting), I'm a diabetes off-label body hacker with an A1C of 5.5%. What's that mean to you? I'm not a doctor, but I'm a hell of a good diabetic.

I know what I'm talking about because I'm living it, and living it well. A doctor may be able to tell me to adjust my insulin every 3 months when I see them, but they aren't up with me at 4 am in a hotel in Germany with jet-lag telling me what to do when I'm having a low. Forgive me this hubris, but it comes from 75,000 finger pricks and yes, it hurts every time, and no, my insulin pump doesn't automatically cure me.

Last year the FDA approved an Insulin Pump that shuts off automatically if it detects the wearer is having a low sugar. The press and the company itself called this new feature an "artificial pancreas." Nonsense. It's WAY too early to call this Insulin Pump an Artificial Pancreas.

Now we are seeing a new "bionic" pancreas for which the press is writing headlines like "A Father Has Invented a Bionic Organ to Save His Son From Type 1 Diabetes" and "'Bionic Pancreas' Astonishes Diabetes Researchers."

It's a great proof of concept for a closed system based on dual insulin pumps (one with glucagon) and a high-accuracy CGM managed by an iPhone. But that's not a fun headline, is it?

"Boston University biomedical engineer Ed Damiano and a team of other researchers published a study earlier this month detailing a system that could prevent these dangerous situations."

Indeed, in the study in the New England Journal of Medicine, Ed Damiano, Ph.D., is listed alongside Steven J. Russell, M.D., Ph.D., Firas H. El-Khatib, Ph.D., Manasi Sinha, M.D., M.P.H., Kendra L. Magyar, M.S.N., N.P., Katherine McKeon, M.Eng., Laura G. Goergen, B.S.N., R.N., Courtney Balliro, B.S.N., R.N., Mallory A. Hillard, B.S., and David M. Nathan, M.D.

They are clearly all brilliant and of note. Let's break the study down.

"...we compared glycemic control with a wearable, bihormonal, automated, “bionic” pancreas (bionic-pancreas period) with glycemic control with an insulin pump (control period) for 5 days in 20 adults and 32 adolescents with type 1 diabetes mellitus."

They are trying to improve blood sugar control. That means keeping my numbers as "normal" as possible to avoid the nasty side effects - blindness and amputation in the long term from highs, and coma and death from lows. The general idea is that since my actual pancreas isn't operating, I'll need another way to get insulin into my system. "Bihormonal" means they are delivering not just insulin, which lowers blood sugar, but also glucagon, which effectively raises blood sugar. They tested this for 5 days on a bunch of people.

"The device consisted of an iPhone 4S (Apple), which ran the control algorithm, and a G4 Platinum continuous glucose monitor (DexCom) connected by a custom hardware interface."

I use a DexCom G4, by the way. It's a lovely device. It gives me an estimate of my blood sugar every 5 minutes by drawing a parallel between what it detects in the interstitial fluid of my own fat and tissues (not my whole blood), and it sends that estimate wirelessly to a handset. I currently make calculations in my head and decide (note that keyword: decide) how much insulin to take. I then manually tell my Medtronic insulin pump how much insulin to deliver. The DexCom must be calibrated at least twice daily with a whole-blood finger stick. Also, it's not too accurate on day 1, and can be wholly inaccurate after its listed 7-day effectiveness range. But it's that keyword that this project is trying to help with. Decide. I have to decide, calculate, guess, determine. That's hard for me as an adult. It's near-impossible for an 8-year-old. Or an 80-year-old. Computers are good at calculating; maybe a computer can do this tedious work for us.
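
To make "decide" concrete: the mental math is roughly the textbook bolus calculation, a meal dose plus a correction dose. Here's an illustrative sketch with made-up example ratios (this is NOT medical advice, and the real decision has all the confounding factors listed below):

// Illustrative only - example ratios, not medical advice
double carbs = 45;             // grams of carbohydrate about to be eaten
double carbRatio = 10;         // grams covered by 1 unit of insulin (example)
double currentBG = 180;        // mg/dL, from the CGM or finger stick
double targetBG = 100;         // mg/dL goal
double correctionFactor = 50;  // mg/dL dropped per unit of insulin (example)

// Meal bolus plus correction bolus - the mental math made explicit
double bolus = (carbs / carbRatio) + ((currentBG - targetBG) / correctionFactor);
// (45 / 10) + (80 / 50) = 4.5 + 1.6 = 6.1 units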

It's two pumps, one with insulin, one with glucagon, and an iPhone controlling them both

The thing is, with Type 1 Diabetes there are dozens of other factors to consider. How much did I eat? What did I eat? Am I sick? Does my stomach work? Do I digest slowly? Quickly? Do I have any acetaminophen in my system? Am I going jogging afterwards? Is this insulin going bad? Is the insulin pump's cannula bent? There are dozens (I'm sure I could come up with a hundred) of other factors. Read Lane Desborough's paper (PPT as a PDF) on "Applying STPA (System Theoretic Process Analysis) to the Artificial Pancreas for People with Type 1 Diabetes" for a taste of what needs to be done.


The brilliance of this system - this "bionic" pancreas - is this...and these are MY words, no one else's:

The two-pump bionic pancreas system gives you rather a LOT of insulin if needed (as if it's descending a plane quickly and dramatically), then it pulls you up nicely with a bit of glucagon (as if the pilot screamed "pull up" as he noticed the altitude change).

It's the addition of the glucagon to get you out of lows that is interesting. Typically, diabetics have a big syringe of glucagon in the fridge for emergencies. If you're super low - dangerously loopy - your partner can get you out of it with a big bolus of glucagon. But if you put glucagon in an insulin pump, you can deliver tiny amounts, and now you are moving the graph in two directions.

Think I'm kidding about the "pull up, pull up" analogy?

Here's a snippet of a graph from page 15 of one of the Appendices (PDF). Note around 19:00, the blue bar going down, that's a lot of insulin. Then the BG numbers come down, FAST. Note the black triangle at around 20:20. That's "pull up, pull up" and a bolus of glucagon in red. And more, and more, in fact, there are many glucagon boluses keeping the numbers up, presumably happening while the subject sleeps. Then around 07:00 the numbers rise, presumably from the Dawn Effect, and another automatic insulin bolus (an overcorrection) and then more glucagon. It's a wonderfully controlled roller-coaster. This isn't using the word roller-coaster as a pejorative - that is the life I lead as a diabetic.

Pull up, pull up!

It's also not mentioned in the press that this system uses a lot more insulin than I do today. A lot more, due to its "dose and correct" algorithm's design.

"Among the other 11 patients, the mean total daily dose of insulin was 50% higher during the bionic-pancreas period than during the control period (P=0.001);"

UPDATE: I spoke to Dr. Russell, and I'm not entirely correct that this system uses a lot more insulin. The system didn't use much more insulin in diabetic kids who have very controlled diets, and was 50% higher in only some of the adults, presumably because (anecdotally) many of them were eating a lot more and "testing" the extents of the system.

I use about 40U a day, total. So we're looking at me using perhaps 60U a day with this system. As with any drug, though, insulin use has its side effects. It can cause fat deposits and scarring at injection sites, and we can become resistant to it. It'd be interesting to think about a study where someone's on 50% more insulin for years. Would that cause increases in any of these side effects? I don't know, but it's an interesting question. Should a closed system also optimize for doing its job with the minimum possible insulin? I optimize for that today, on my own, hoping that it will make a difference in the long run.

But glucagon isn't pump-friendly as it is today. An unfortunate note that isn't covered in any of the press is that they are having to replace the glucagon every day. Juxtapose that with what I do currently with insulin. I keep my pump filled and swap out its contents and cannula (insertion site) every 4-7 days. Insulin itself can survive ~28 days at room temperature, although it's most often refrigerated. Changing one of the pumps daily is a bummer, as they point out.

"...the poor stability of currently available glucagon formations necessitated daily replacement of the glucagon in the pump with freshly reconstituted material."

It's early, people. It's not integrated, it's a proof of concept. It's impressive, to be sure, but Rube-Goldbergian in its hardware implementation. Two pumps, a Dexcom G4 inside a docking station, receiving BG data over RF from the transmitter, then the Dexcom wirelessly talking to an iPhone within another docking station.

"Since a single device that integrates all the components of a bionic pancreas is not yet available, we had to rely on wireless connectivity to the insulin and glucagon pumps, which was not completely reliable."

I'm not trying to undermine, undercut, or minimize the work - it's super promising - but medical journalists need to seriously understand what's really going on here.

Fast forward a few years, and there will very likely be a bi-hormonal "double" pump with both (more stable) glucagon and insulin, combined with a continuous glucose meter, that provides the average Type 1 Diabetic with a reasonable solution to keep their numbers out of imminent danger. Great for kids, a relief for many.

But, just as pumps are today, it'll be USD$5000 to USD$10000. It will require insurance and equipment; it'll require testing, software, and training; and it won't be - it can't be - perfect. This is a move forward, but it's not a cure. Accept it for what it is: a step in the right direction.

Do I want it? Totally. But, journalists and families of diabetics, let's not overreact or get too ahead of ourselves. Does this mean I should eat crap and the machine will take care of it? No. I'm healthy today because I care to be. I work at it. Every day. As I'm typing now, I know my numbers, my trend-line, and my goal: stay alive another day.

Read my article from 2001 - yes, that's 13 years ago - called One Guy, an Insulin Pump, and 8 PDAs:

"I imagine a world of true digital convergence -- assuming that I won't be cured of diabetes by some biological means in my lifetime -- an implanted pump and glucose sensor, an advanced artificial pancreas. A closed system for diabetics that automatically senses sugar levels and delivers insulin has been the diabetics' holy grail for years. But with the advent of wireless technology and the Internet, my already optimistic vision has brightened. If I had an implanted device with wireless capabilities, it could be in constant contact with my doctor. If the pump failed, it could simultaneously alert me, my doctor, and the local emergency room, downloading my health history in preparation for my visit. If it was running low on insulin, the pump could report its status to my insurance company, and I'd have new insulin delivered to my doorstep the next day. But that's not enough. With Bluetooth coming, why couldn't my [PDA] monitor my newly implanted smart-pump?"

Go and educate yourselves about the "We Are Not Waiting" movement. Hear how Scott Leibrand has a "DIY Artificial Pancreas" that's lowered his girlfriend's average blood sugar dramatically using only a DexCom G4 and smart algorithms. You can make a change today - at your own risk, of course.

Read about the DiabetesMine D-Data ExChange and how the non-profit Tidepool is creating open source software and systems to make innovation happen now, rather than waiting for it. Get the code, join the conversation. Exercise, eat better, read, work. You can hack your Diabetes today. #WeAreNotWaiting


NuGet Package of the Week: ASP.NET Web API Caching with CacheCow and CacheOutput

June 28, 2014 - Posted in ASP.NET Web API | NuGet | NuGetPOW

You can see other cool NuGet Packages I've mentioned on the blog here. Today's NuGet package is CacheCow, which has possibly the coolest Open Source Library name since Lawnchair.js.


"CacheCow is a library for implementing HTTP caching on both client and server in ASP.NET Web API. It uses message handlers on both client and server to intercept request and response and apply caching logic and rules."

CacheCow was started by Ali Kheyrollahi with help from Tugberk Ugurlu and the community, and is a fantastically useful piece of work. I wouldn't be surprised to see this library start showing up in more places one day.

As an aside, Ali, this would be a great candidate for setting up a free AppVeyor Continuous Integration build along with a badge showing that the project is building and healthy!

CacheCow on the server can manage the cache in a number of ways. You can store it in SQL Server with the EntityTagStore, or implement your own storage handler. You can keep the cache in memcached, Redis, etc.

Consider using a library like CacheCow if you're putting together a Web API and haven't given sufficient thought to caching yet, or if you're already sprinkling cache code throughout your business logic. You might already suspect that code is littering your logic but perhaps haven't gotten around to tidying it up. Now is a good time to unify your caching.

As a very simple example, here are the HTTP headers from an HTTP GET to a Web API:

Cache-Control: no-cache
Content-Length: 19
Content-Type: application/json; charset=utf-8
Date: Fri, 27 Jun 2014 23:22:10 GMT
Expires: -1
Pragma: no-cache

Here's the same thing after adding the most basic caching to my ASP.NET application's config:

GlobalConfiguration.Configuration.MessageHandlers.Add(new CachingHandler(GlobalConfiguration.Configuration));

The HTTP headers for the same GET with CacheCow enabled:

Cache-Control: no-transform, must-revalidate, max-age=0, private
Content-Length: 19
Content-Type: application/json; charset=utf-8
Date: Fri, 27 Jun 2014 23:24:16 GMT
ETag: W/"e1c5ab4f818f4cde9426c6b0824afe5b"
Last-Modified: Fri, 27 Jun 2014 23:24:16 GMT

Notice the Cache-Control header, the Last-Modified, and the ETag. The ETag is weak, as indicated by "W/", which means that this response is semantically equivalent to the last response. If I was caching persistently, I could get a strong ETag indicating that the cached response was byte-for-byte identical. Also, if the client was smart about caching and added If-Modified-Since or If-None-Match for ETags, the response might be a 304 Not Modified rather than a 200 OK. If you're going to add caching to your Web API server, you'll want to make sure your clients respect those headers fully!
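
Here's what a well-behaved client looks like from the other side: a conditional GET with HttpClient that sends the ETag back and handles the 304. A minimal sketch (the URL and ETag value are placeholders):

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ConditionalGet
{
    static async Task Main()
    {
        var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, "http://example.org/api/values");

        // Echo the ETag from the last response; 'true' marks it as weak (W/)
        request.Headers.IfNoneMatch.Add(
            new EntityTagHeaderValue("\"e1c5ab4f818f4cde9426c6b0824afe5b\"", true));

        HttpResponseMessage response = await client.SendAsync(request);
        if (response.StatusCode == HttpStatusCode.NotModified)
        {
            Console.WriteLine("304 Not Modified - serve the locally cached copy");
        }
    }
}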

From Ali's blog: you can still use HttpClient in your clients, but you use WebRequestHandler as the message handler:

HttpClient client = new HttpClient(new WebRequestHandler()
{
    CachePolicy = new RequestCachePolicy(RequestCacheLevel.Default)
});
var httpResponseMessage = await client.GetAsync("http://superpoopy");

Really don't want a resource cached? Remember, this is HTTP, so send Cache-Control: no-cache from the client!
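
With HttpClient that's a one-liner (reusing the client from the sketch above):

// Ask for a fresh representation instead of a cached one
client.DefaultRequestHeaders.CacheControl = new CacheControlHeaderValue { NoCache = true };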

Of course, one of the most important aspects of caching anything is "when do I invalidate the cache?" CacheCow gives you a lot of control over this, but you really need to be aware of what your actual goal is or you'll find things cached that you don't want, or things not cached that you do.

  • Are you looking for time-based caching? Cache for 5 min after a DB access?
  • Are you looking for smart caching that invalidates when it sees what could be a modification? Invalidate a collection after a POST/PUT/DELETE?

Given that you're likely using REST, you'll want to make sure that the semantics and intent of these caching headers are reflected in your behavior. Last-Modified should reflect reality when possible.

From the CacheCow Wiki, there are great features on both the server and client sides. Here are the CacheCow.Server features:

  • Managing ETag, Last Modified, Expires and other cache related headers
  • Implementing returning Not-Modified 304 and precondition failed 412 responses for conditional calls
  • Invalidating cache in case of PUT, POST, PATCH and DELETE
  • Flexible resource organization. Rules can be defined so invalidation of a resource can invalidate linked resources

and here are the CacheCow.Client features:

  • Caching GET responses according to their caching headers
  • Verifying cached items for their staleness
  • Validating cached items if must-revalidate parameter of Cache-Control header is set to true. It will use ETag or Expires whichever exists
  • Making conditional PUT for resources that are cached based on their ETag or expires header, whichever exists

Another good ASP.NET caching library to explore is ASP.NET Web API "CacheOutput" by Filip Wojcieszyn. While it doesn't have a fun-to-say name ;) it's got some great features and is super easy to get started with. You can find CacheOutput on NuGet:

Install-Package Strathweb.CacheOutput.WebApi2

And you'll configure your caching options using the intuitive CacheOutput attributes like those you may have seen in ASP.NET MVC:

[CacheOutput(ClientTimeSpan = 100, ServerTimeSpan = 100)]
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}

ASP.NET Web API CacheOutput has great getting-started docs and clear, easy-to-read code.

So, you've got options. Go explore!

You can also pick up the Pro ASP.NET Web API book at Amazon. Go explore CacheCow or CacheOutput and support open source! If you find issues or feel there's work to be done in the documentation, why not do it and submit a pull request? I'm sure any project would appreciate some help with updated samples, quickstarts, or better docs.


Sponsor: Many thanks to our friends at Octopus Deploy for sponsoring the feed this week. Did you know that NuGet.org deploys with Octopus? Using NuGet and powerful conventions, Octopus Deploy makes it easy to automate releases of ASP.NET applications and Windows Services. Say goodbye to remote desktop and start automating today!


Trying Redis Caching as a Service on Windows Azure

June 25, 2014 - Posted in Azure


First, if you already have an MSDN subscription (through your work, whatever), make sure to link your MSDN account and an Azure account; otherwise you're throwing money away. MSDN subscribers get between US$50 and US$150 a month in free Azure time, plus a 33% discount on VMs and 25% off Reserved Websites.

Next, log into the Azure Preview Portal at https://portal.azure.com. Then, go to New | Redis Cache to make a new instance. The Redis Cache is in preview today and pricing details are here. Both 250 MB and 1 GB caches are free until July 1, 2014, so you've got a week to party hard for free.


Of course, if you're a Redis expert, you can (and always could) run your own VM with Redis on it. There are two "Security Hardened" Ubuntu VMs with Redis at the MS Open Tech VMDepot that you could start with.

I put one Redis Cache in Northwest US, where my podcast's website is. The new Azure Portal knows that these two resources are associated with each other because I put them in the same resource group.


There's Basic and Standard. Similar to Websites' "basic vs. standard," it comes down to this: Standard you can count on; it has an SLA and replication set up. Basic doesn't. Both have SSL, are dedicated, and include auth. I'd think of Standard as "I'm serious about my cache" and Basic as "I'm messing around."

There are multiple caching services (or Cache as a Service) on Azure.

  • Redis Cache: Built on the open source Redis cache. This is a dedicated service, currently in Preview.
  • Managed Cache Service: Built on AppFabric Cache. This is a dedicated service, currently in General Availability.
  • In-Role Cache: Built on AppFabric Cache. This is a self-hosted cache, available via the Azure SDK.

Having Redis available on Azure is nice since my startup MyEcho uses SignalR and SignalR can use Redis as the backplane for scaleout.

Redis Server managing SignalR state
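
Wiring that up is a one-liner at startup via the Microsoft.AspNet.SignalR.Redis package. A sketch (the server, port, password, and event key values are placeholders):

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every web server publishes and subscribes through this Redis instance
        GlobalHost.DependencyResolver.UseRedis(
            "contoso5.redis.cache.windows.net", 6379, "password", "MyEcho");
        app.MapSignalR();
    }
}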

Marc Gravell (with a "C") over at StackExchange/StackOverflow has done us all a service with the StackExchange.Redis client for .NET on NuGet. Getting stuff in and out of Redis using .NET is very familiar to anyone who has used a distributed Key Value store before.

  • BONUS: There's also ServiceStack.Redis from https://servicestack.net that includes both the native-feeling IRedisNativeClient and the more .NET-like IRedisClient. ServiceStack also supports Redis 2.8's new SCAN operations for cursoring around large data sets.

using StackExchange.Redis;

// Connect once and reuse the ConnectionMultiplexer; it's designed to be shared
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

IDatabase cache = connection.GetDatabase();

// Perform cache operations using the cache object...
// Simple put of integral data types into the cache
cache.StringSet("key1", "value");
cache.StringSet("key2", 25);

// Simple get of data types from the cache
string key1 = cache.StringGet("key1");
int key2 = (int)cache.StringGet("key2");

In fact, the ASP.NET team announced just last month the ASP.NET Session State Provider for Redis Preview Release that you may have missed. Also on NuGet (as a -preview) this lets you point the Session State of your existing (perhaps legacy) ASP.NET apps to Redis.
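
Once the package is installed, pointing session state at Redis is a web.config change. A sketch along these lines (the host and access key are placeholders):

<sessionState mode="Custom" customProvider="RedisSessionStateStore">
  <providers>
    <add name="RedisSessionStateStore"
         type="Microsoft.Web.Redis.RedisSessionStateProvider"
         host="contoso5.redis.cache.windows.net"
         accessKey="..."
         ssl="true" />
  </providers>
</sessionState>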

After pushing and pulling data out of Redis for a while, you'll notice how nice the new dashboard is. It gives you a great visual sense of what's going on with your cache. You see CPU and Memory Usage, but more importantly Cache Hits and Misses, Gets and Sets, as well as any extraordinary events you need to know about. As a managed service, though, there's no need to sweat the VM (or whatever) that your cache is running on. It's handled.

image


Perhaps you're interested in Redis but you don't want to run it on Azure, or perhaps even on Linux. You can run Redis via MSOpenTech's Redis on Windows fork. You can install it from NuGet or Chocolatey, or download it directly from the project's GitHub repository. If you do get Redis for Windows (super easy with Chocolatey), you can use redis-cli.exe at the command line to talk to the Azure Redis Cache as well (of course!).

It's easy to run a local Redis server with redis-server.exe, test it out in development, and then change your app's Redis connection string when you deploy to Azure.



Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.