Hardware review of the Hewlett-Packard ProLiant N36L Microserver

[flickr-photo:id=5204509633,size=m]

Low-power systems are popular with enthusiasts everywhere. From the Linksys NSLU2 (affectionately known as “the slug”) to the various Marvell SheevaPlug devices, there isn’t a shortage of options. With all of them, however, you need to make compromises—be it dealing with ARM’s quirks, a lack of I/O expansion, poor performance, or lackadaisical manufacturers.

If you’re willing to compromise on size (while still getting something much smaller than your average PC), on power (while still consuming less than your average PC), and on performance (while still running circles around an ARM-based device), then take a look at the Hewlett-Packard ProLiant N36L “Microserver”. The system was introduced in September 2010, but reviews and photos of it are few and far between. In this article, I review the hardware aspects of the N36L; a separate article will review its software aspects [coming soon].

Internals

[flickr-photo:id=5204528809,size=m]

The N36L is powered by an x86-based AMD Athlon II Neo processor running at 1.3 GHz, a part intended for low-power systems like netbooks. Despite its slower clock speed, this AMD CPU typically benchmarks faster than Intel’s 1.6 GHz Atom. For the enterprise crowd, the Athlon II Neo is a 64-bit processor and supports hardware-accelerated virtualization and nested paging, making it ideal for partitioning lightly-used services into lightweight VMs. With two DDR3 DIMM slots, the N36L can accommodate up to 8 GiB of RAM.
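On Linux, you can verify these virtualization features before committing to a hypervisor by checking the CPU flags the kernel reports. A small sketch (the flag names are the standard x86 cpuinfo flags for AMD-V and nested paging; the file argument exists only so the check isn’t tied to a live system):

```shell
# Report whether a cpuinfo-style file advertises AMD-V ("svm") and
# nested paging ("npt"). On a live Linux system, pass /proc/cpuinfo.
check_virt_flags() {
    for flag in svm npt; do
        if grep -qw "$flag" "$1"; then
            echo "$flag: present"
        else
            echo "$flag: absent"
        fi
    done
}

# Usage on a live system:
#   check_virt_flags /proc/cpuinfo
```

If both flags show up, KVM with nested paging should work out of the box; if they’re absent, check whether virtualization has been disabled in the BIOS.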

Graphics is provided by an integrated ATI Mobility Radeon HD 4200 (which also supports GPGPU/OpenCL via proprietary drivers), and the Gigabit NIC is a Broadcom NetXtreme BCM5723.

[flickr-photo:id=5205136774,size=m]

The mainboard provides a respectable amount of expansion. It has two PCIe slots: an x16 (you could easily use a discrete graphics card, though you’d have to be picky about dimensions) and an x1. Adjacent to the x1 slot is an x4 slot, supposedly for use with HP’s proprietary management card. You could probably hack a conventional x4 card into the slot, but I’d rather HP had made the x4 slot usable and reserved the x1 slot for its proprietary add-ons (does a management card really need more than PCIe x1?).

The chassis’ disk racks connect via a mini-SAS connector. There’s one internal SATA connector for the 5.25” bay, but the system’s eSATA connector faces outward, so your dreams of easily putting six drives in this tiny system are dashed.

There’s an internal USB 2.0 port, a common feature on servers. It makes running an OS off a USB flash drive that much easier—sequestered internally, such a drive won’t accidentally get knocked off.

Externals

The frontside of the N36L is… “server-like”, whatever that means. Along the top are LED indicators for disk and network activity, as well as the system’s backlit power button. There are four USB 2.0 ports along the right side, and an HP logo that glows blue when the system is on. The chassis door is metal (not plastic!), and has a lock.

[flickr-photo:id=5204512873,size=m]

The backside of the N36L is austere. The only ports: two USB 2.0 ports, one D-sub VGA port, a Gigabit Ethernet port, and one eSATA port. There’s a Kensington security lock slot, as well as an “expander slot” for HP’s proprietary management card. The power supply, fortunately, is integrated (power bricks are a pet peeve of mine) and uses a standard AC power cord.

There are two fans: a 120 mm fan for the system’s main cooling, and a 40 mm fan internal to the PSU. Fortunately, both are quiet; HP rates the system at 21 dB. It’s not silent, but it is quiet. There are no top or side vents; air is drawn in through the front and exhausted out the back.

[flickr-photo:id=5204520585,size=m]

Unlike most PCs, the N36L does not use Phillips-head screws for the user-accessible bits. Two sizes of Torx screws are used (I’m unsure of the sizes), and HP was kind enough to include a Torx screwdriver that snaps into the inside of the machine’s front door. Screws for hard disks and the optical disk drive are also stowed in convenient holes in the front door—no little baggies of screws to lose here! A single thumbscrew on the top-back removes the top cover, and two thumbscrews hold the motherboard plate in place.

[flickr-photo:id=5204523307,size=m]

Other than the handle mechanism, which has a metal spring, the N36L’s disk caddies are simple plastic affairs. The plastic does not appear to be particularly high quality, but since the only purpose of these things is to hold disks (and not face the environment), it’s probably good enough.

How much power does the N36L consume? Using my Kill-a-Watt, I measured 60 W on startup, which settled down to 45 W or so after booting and idling. This is unfortunately much more than I’d have liked, but with four spinning disks I suppose it’s reasonable.

Cons

I’m not trying to be a pessimist by not including a Pros list, but honestly, if you need one at this point you probably don’t need this machine. However, there are some cons I found annoying:

  • Low height clearance for RAM. I found this out the hard way when my heatspreader-equipped DIMMs would not fit.
  • In the USA, at least, the N36L ships with 1 GiB of RAM and either a 160 GB or a 250 GB disk, which I immediately tossed in favor of 8 GiB of RAM and four Western Digital 2 TB Green-series disks. HP could have easily knocked $50 off the price by not including RAM and disk.
  • No SATA cable included for the 5.25” bay. This is a minor quibble, and was probably done to save that last extra $0.50—but it makes the decision to include the RAM and the disk seem that much more strange.

Conclusion

Why did I get an N36L? The short list:

  • It’s x86-based. I didn’t want to muck about with ARM—its benefits for me are few.
  • It can hold four 3.5” disks, and with Gigabit Ethernet it functions as a great, inexpensive NAS.
  • Does not come with an operating system—yes, you are NOT paying Microsoft’s Windows tax! Also, all of the hardware in the N36L is well-supported by Linux and free software. Most other systems in this class force a Windows Home Server license on you.
  • Cheap. I bought the N36L for $320 USD. It’s more expensive than most ARM-based alternatives, but simultaneously much more powerful.

If you’re looking for photos, see my HP ProLiant N36L set on Flickr. And, if you liked this article, please support this site and consider buying the N36L via affiliate link through Amazon (which only has the 250 GB disk model) or Newegg (which has both the 160 GB model and 250 GB model).

Like this article? Please support my writing! Flattr my blog (see my thoughts on Flattr), tip me via PayPal, or send me an item from my Amazon wish list.

Comments

tommyguy's picture
Great review! I’m really considering buying this server, but hadn’t found any proper review before now. Is it possible to use regular consumer SATA drives in this box, or are HP LFF drives the only ones that fit? It’s hard to tell with the tiny depth of field in the image. And which chipset is used? I found a page claiming the NB to be AMD 785E, but I haven’t been able to figure out the SB. I guess it is either SB700 or SB710. Or maybe the server series (SP5100)? Is the mini-SAS plug driven by the SB or an additional controller? Would you say the airflow through the hard drive bay is sufficient? The drives seem pretty tightly stacked, or am I wrong?
Samat's picture

“LFF” is some kind of HP jargon. But it means your typical, consumer 3.5” disk.

I don’t have the N36L on-hand anymore to check the chips in use — and sorry, the motherboard photo is indeed too blurry to read the chip names off of. I was saving the lspci listing for the software review, but here it is anyway:

13:45:24 $ sudo lspci -v                                                                
00:00.0 Host bridge: Advanced Micro Devices [AMD] RS880 Host Bridge
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 0
        Capabilities: [c4] HyperTransport: Slave or Primary Interface
        Capabilities: [54] HyperTransport: UnitID Clumping
        Capabilities: [40] HyperTransport: Retry Mode
        Capabilities: [9c] HyperTransport: #1a
        Capabilities: [f8] HyperTransport: #1c

00:01.0 PCI bridge: Hewlett-Packard Company Device 9602 (prog-if 00 [Normal decode])
        Flags: bus master, 66MHz, medium devsel, latency 64
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=64
        I/O behind bridge: 0000e000-0000efff
        Memory behind bridge: fe700000-fe8fffff
        Prefetchable memory behind bridge: 00000000fc000000-00000000fdffffff
        Capabilities: [44] HyperTransport: MSI Mapping Enable+ Fixed+
        Capabilities: [b0] Subsystem: Hewlett-Packard Company Device 1609

00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
        Memory behind bridge: fe900000-fe9fffff
        Capabilities: [50] Power Management version 3
        Capabilities: [58] Express Root Port (Slot-), MSI 00
        Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Capabilities: [b0] Subsystem: Hewlett-Packard Company Device 1609
        Capabilities: [b8] HyperTransport: MSI Mapping Enable+ Fixed+
        Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 
        Capabilities: [110] Virtual Channel
        Kernel driver in use: pcieport

00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode] (rev 40) (prog-if 01 [AHCI 1.0])
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 41
        I/O ports at d000 [size=8]
        I/O ports at c000 [size=4]
        I/O ports at b000 [size=8]
        I/O ports at a000 [size=4]
        I/O ports at 9000 [size=16]
        Memory at fe6ffc00 (32-bit, non-prefetchable) [size=1K]
        Capabilities: [50] MSI: Enable+ Count=1/4 Maskable- 64bit+
        Capabilities: [70] SATA HBA v1.0
        Capabilities: [a4] PCI Advanced Features
        Kernel driver in use: ahci

00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller (prog-if 10 [OHCI])
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 18
        Memory at fe6fe000 (32-bit, non-prefetchable) [size=4K]
        Kernel driver in use: ohci_hcd

00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller (prog-if 20 [EHCI])
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 17
        Memory at fe6ff800 (32-bit, non-prefetchable) [size=256]
        Capabilities: [c0] Power Management version 2
        Capabilities: [e4] Debug port: BAR=1 offset=00e0
        Kernel driver in use: ehci_hcd

00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller (prog-if 10 [OHCI])
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 18
        Memory at fe6fd000 (32-bit, non-prefetchable) [size=4K]
        Kernel driver in use: ohci_hcd

00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller (prog-if 20 [EHCI])
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 17
        Memory at fe6ff400 (32-bit, non-prefetchable) [size=256]
        Capabilities: [c0] Power Management version 2
        Capabilities: [e4] Debug port: BAR=1 offset=00e0
        Kernel driver in use: ehci_hcd

00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 41)
        Flags: 66MHz, medium devsel
        Kernel driver in use: piix4_smbus

00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller (rev 40) (prog-if 8a [Master SecP PriP])
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 17
        I/O ports at 01f0 [size=8]
        I/O ports at 03f4 [size=1]
        I/O ports at 0170 [size=8]
        I/O ports at 0374 [size=1]
        I/O ports at ff00 [size=16]
        Kernel driver in use: pata_atiixp

00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller (rev 40)
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 0

00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge (rev 40) (prog-if 01 [Subtractive decode])
        Flags: bus master, 66MHz, medium devsel, latency 64
        Bus: primary=00, secondary=03, subordinate=03, sec-latency=64

00:16.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller (prog-if 10 [OHCI])
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 18
        Memory at fe6fc000 (32-bit, non-prefetchable) [size=4K]
        Kernel driver in use: ohci_hcd

00:16.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller (prog-if 20 [EHCI])
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 17
        Memory at fe6ff000 (32-bit, non-prefetchable) [size=256]
        Capabilities: [c0] Power Management version 2
        Capabilities: [e4] Debug port: BAR=1 offset=00e0
        Kernel driver in use: ehci_hcd

00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor HyperTransport Configuration
        Flags: fast devsel
        Capabilities: [80] HyperTransport: Host or Secondary Interface

00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Address Map
        Flags: fast devsel

00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor DRAM Controller
        Flags: fast devsel

00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Miscellaneous Control
        Flags: fast devsel
        Capabilities: [f0] Secure device 
        Kernel driver in use: k10temp

00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Link Control
        Flags: fast devsel

01:05.0 VGA compatible controller: ATI Technologies Inc M880G [Mobility Radeon HD 4200] (prog-if 00 [VGA controller])
        Subsystem: Hewlett-Packard Company Device 1609
        Flags: bus master, fast devsel, latency 0, IRQ 18
        Memory at fc000000 (32-bit, prefetchable) [size=32M]
        I/O ports at e000 [size=256]
        Memory at fe8f0000 (32-bit, non-prefetchable) [size=64K]
        Memory at fe700000 (32-bit, non-prefetchable) [size=1M]
        Expansion ROM at  [disabled]
        Capabilities: [50] Power Management version 3
        Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
        Kernel driver in use: radeon

02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5723 Gigabit Ethernet PCIe (rev 10)
        Subsystem: Hewlett-Packard Company NC107i Integrated PCI Express Gigabit Server Adapter
        Flags: bus master, fast devsel, latency 0, IRQ 42
        Memory at fe9f0000 (64-bit, non-prefetchable) [size=64K]
        Capabilities: [48] Power Management version 3
        Capabilities: [40] Vital Product Data
        Capabilities: [60] Vendor Specific Information: Len=6c 
        Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [cc] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [13c] Virtual Channel
        Capabilities: [160] Device Serial Number d4-85-64-ff-fe-6a-6c-4d
        Capabilities: [16c] Power Budgeting 
        Kernel driver in use: tg3

I can’t tell the northbridge from this. The southbridge is an “AMD SB700/SB800”. There doesn’t appear to be an additional SATA controller, so it must be driven by the southbridge. Everything worked out of the box with Debian GNU/Linux 6.0 (except the Broadcom NIC, which needed firmware from non-free) so I didn’t care to look into any of these things.

I thought airflow through the drive bays was fine. With some ¼” space between disks (more than other racks I’ve seen), they’re not tightly stacked, and drives feel lukewarm to the touch.

tommyguy's picture
Thanks a lot! NB seems to be RS880. Probably AMD 785G or 880G. They are both neat NB's, so it is a pretty good setup. SB700/800 doesn't tell much, but as long as the Mini-SAS comes from the SB it's OK. Non-free NIC-firmware may be a problem in some cases, but it seems to be OK in FreeBSD 8.1 according to this thread: http://hardforum.com/showthread.php?t=1555868 Did you solve it with aptitude or "pro hacking"? It is good to hear that the drives get appropriate cooling. That is probably the most important issue for me.
Samat's picture

Certain Linux distributions (e.g. Debian) and operating systems (e.g. OpenBSD) don’t distribute binary firmware by default. On Debian, at least, you can install it with aptitude install firmware-linux, or by following special steps during installation.
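For what it’s worth, on Debian 6.0 the whole procedure is only a few commands; a rough sketch, run as root (the mirror URL is just an example, and firmware-linux is the meta-package my own reply above refers to):

```shell
# Enable the non-free archive area alongside main, then install the
# binary firmware meta-package needed by drivers like tg3 (Broadcom).
echo 'deb http://ftp.debian.org/debian squeeze main non-free' \
    >> /etc/apt/sources.list
aptitude update
aptitude install firmware-linux
```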

Other OSes, like Ubuntu and, as you pointed out, FreeBSD, are looser with their definition of freedom and don’t have this issue.

sebus's picture

USB issue... Kingspec USB SSD 8 Gb

Server works great with Openfiler 2.3 installed on Kingston DataTraveller 8Gb USB stick

I tried to use a Kingspec USB SSD 8 GB stick, and during POST the unit hangs on:

Auto-detecting USB Mass Storage Devices...
Device #01:

and never goes any further until... I pull the Kingspec out & put it back in

This way the unit carries on fine

Obviously it is a problem with HP BIOS/Kingspec combo
Most likely BIOS update could sort this (one can hope...)

So while it works, it is a bugger for server restarts (one needs to get physically to the unit, so remote reboot is a no-go)

sebus's picture
Using ChipGenius shows that there is a JMicron tool available for the Kingspec SSD 8 GB stick: http://bbs.ocer.net/thread-275186-1-1.html. Running the tool and resetting Device Settings to default makes the drive lose the red activity light, but makes it work perfectly.

Seb
Alan Pio's picture

Hi,

I have recently purchased an HP ProLiant MicroServer for my business. I need some help on which hard drives to buy that are compatible with this server. I saw you mentioned the WD Green series. Is there a particular make or model number you’d recommend for a first-time buy, or can I buy any of their models within this range?

Thanks
Alan

Samat's picture

The N36L can use any 3.5" SATA-II disk; get the best you can afford—for non-OS storage, I've been pleased with WD's Green Series line, especially with their support for 4 KB sectors (advertised as "Advanced Format").
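On Linux, you can check what sector sizes a given drive actually reports by reading sysfs; a sketch (/dev/sda is a placeholder device name, and the SYSFS_BLOCK override exists only to make the function testable off a live system):

```shell
# Print the logical and physical sector sizes a block device reports.
# An Advanced Format drive reports a 4096-byte physical sector, usually
# behind 512-byte logical sectors for compatibility ("512e"). Beware
# that some early Advanced Format drives misreport 512 for both.
sector_sizes() {
    queue="${SYSFS_BLOCK:-/sys/block}/$(basename "$1")/queue"
    printf 'logical=%s physical=%s\n' \
        "$(cat "$queue/logical_block_size")" \
        "$(cat "$queue/physical_block_size")"
}

# Usage on a live system:
#   sector_sizes /dev/sda
```

Knowing the physical sector size matters mainly for partition alignment: partitions should start on a 4 KiB boundary or write performance suffers badly.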

Joel Levine's picture

Can 3 TB drives be used? I know that one cannot boot from a 3 TB drive, but how about as a non-boot device?

Samat's picture

No idea, I don't have 3 TB disks to test with… but I don't see why not?

The N36L's BIOS also supports booting from GPT partition tables (I discuss it in my software review, should I ever finish it one day), which is what you'd need for booting from 3 TB or larger disks. So, in theory, the N36L should support booting from these large disks as well.

JW's picture

I am strongly motivated to consider buying one or more of these units, however I have a few questions:

>I would probably start small and use pairs of disks with RAID1, two sets per server; however, I missed information on BIOS limitations, specifically the maximum disk size imposed by the 2.1 TB LBA limitation. Does this server suffer this limitation, or can I buy 3 TB or 4 TB disks and use them without issue? This consideration may also speak to use of newer disks within LBA limitations by using larger sectors and compatibility mode.

>Benefits of performance are lost when the Gbit adapter becomes the bottleneck. Are there any suitable and inexpensive single or multiple port PCI-E 1Gbit or 10Gbit adapters that can be added?

>What are the practical limits of the PCI-E connectors, e.g. are they PCI-E V1 or V2? Would an adapter such as a USB3 adapter be bottlenecked?

>I have found little information on performance when investigating NAS. Most performance statistics show performance is only marginally better than USB2 drives in certain scenarios, such as reads or writes of large volumes of small files and significant numbers of connections and concurrent access. Do you have performance metrics for various configurations (1-4 disks, RAID0 and/or RAID1) in various scenarios?

>Power consumption, as you indicate, seems a little high! Are there power saving features either internal to disks, or in the BIOS, or perhaps built into an OS like OpenFiler or RHEL/CentOS/Scientific Linux that may be activated to mitigate power consumption of both disks and other components internally? Perhaps it may be possible to establish a profile of usage so the server may go into hibernation, say overnight or in intervals of minimal activity, and resume as needed.

>I had considered the possibility of using a small SSD in place of the optical drive in addition to up to 4 standard disks and a USB Memory Stick for system, to provide a fast access secondary cache (secondary to system memory). Any tips on setting this up?

> As an alternative to VMware Server or Oracle VirtualBox or KVM, I had considered using a hypervisor, to allow for the running of multiple OS instances at best possible speed. I could consider RHEL and RHEL hypervisors or alternatives such as VMware's free product or Microsoft's Hyper-V product. Is there anyone who has used any of these products on this server who can provide guidance and feedback on their experience?

> I consider the possibility that with HP backing out of the tech market and becoming a service-only company, support for the device may be limited. That being the case, when eventually this product breaks down, it may not be possible to continue using it. I have considered that it may then be desirable to replace the mainboard/CPU/memory with a more powerful solution. While for me this may be a year or two before it becomes necessary, has anyone attempted it yet or considered it and identified alternative components?

>Any other comments or advice?

Thanks in advance for any comments and feedback!

Samat's picture

>I would probably start small and use pairs of disks with RAID1, two sets per server; however, I missed information on BIOS limitations, specifically the maximum disk size imposed by the 2.1 TB LBA limitation. Does this server suffer this limitation, or can I buy 3 TB or 4 TB disks and use them without issue? This consideration may also speak to use of newer disks within LBA limitations by using larger sectors and compatibility mode.

My HP N36L recognizes a 3 TB disk, and when formatted with GPT, Linux works fine with it. I have not attempted to boot from it, however, and I doubt it'd work.
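For reference, the 2.1 TB ceiling falls straight out of MBR's 32-bit sector addressing; assuming standard 512-byte logical sectors, the arithmetic is easy to check:

```shell
# MBR partition tables store sizes as 32-bit sector counts, so with
# 512-byte logical sectors the largest addressable capacity is:
limit=$(( (1 << 32) * 512 ))
echo "$limit bytes"                     # 2199023255552 bytes
echo "$(( limit / 1000000000 )) GB"     # 2199 GB, i.e. ~2.2 TB
```

GPT uses 64-bit sector addresses, which is why disks past that boundary need a GPT label.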

>Benefits of performance are lost when the Gbit adapter becomes the bottleneck. Are there any suitable and inexpensive single or multiple port PCI-E 1Gbit or 10Gbit adapters that can be added?

>What are the practical limits of the PCI-E connectors, e.g. are they PCI-E V1 or V2? Would an adapter such as a USB3 adapter be bottlenecked?

>I have found little information on performance when investigating NAS. Most performance statistics show performance is only marginally better than USB2 drives in certain scenarios, such as reads or writes of large volumes of small files and significant numbers of connections and concurrent access. Do you have performance metrics for various configurations (1-4 disks, RAID0 and/or RAID1) in various scenarios?

>Power consumption, as you indicate, seems a little high! Are there power saving features either internal to disks, or in the BIOS, or perhaps built into an OS like OpenFiler or RHEL/CentOS/Scientific Linux that may be activated to mitigate power consumption of both disks and other components internally? Perhaps it may be possible to establish a profile of usage so the server may go into hibernation, say overnight or in intervals of minimal activity, and resume as needed.

> As an alternative to VMware Server or Oracle VirtualBox or KVM, I had considered using a hypervisor, to allow for the running of multiple OS instances at best possible speed. I could consider RHEL and RHEL hypervisors or alternatives such as VMware's free product or Microsoft's Hyper-V product. Is there anyone who has used any of these products on this server who can provide guidance and feedback on their experience?

These all have open-ended replies, and are not N36L-specific (i.e. off-topic)! Please do your own research.

>I had considered the possibility of using a small SSD in place of the optical drive in addition to up to 4 standard disks and a USB Memory Stick for system, to provide a fast access secondary cache (secondary to system memory). Any tips on setting this up?

I actually use an SSD as a boot device in my N36L, and it works great. Doing a write-up is on the list of things to do…

> I consider the possibility that with HP backing out of the tech market and becoming a service-only company, support for the device may be limited. That being the case, when eventually this product breaks down, it may not be possible to continue using it. I have considered that it may then be desirable to replace the mainboard/CPU/memory with a more powerful solution. While for me this may be a year or two before it becomes necessary, has anyone attempted it yet or considered it and identified alternative components?

You've misread HP's announcement—HP is backing out of the consumer PC market. Since the N36L is branded as part of HP's business-oriented ProLiant line of servers, it's unlikely they'd discontinue support for it, and there has been no indication otherwise. However, I obviously am not HP and cannot predict what they'll do.