NVMe vs. AHCI Comparison - Overview

The differences between an interface designed as an adaptive, translational layer (AHCI) and an interface designed as a simple device adaptive layer (NVMe) drive fundamental architectural and design features that are inherently encompassed in the respective interface specifications. These differences are in addition to various features that are included in a specification designed to expose characteristics and attributes of the underlying attached device technology.



This post is from the FPGA/CPLD forum on EEWorld.
AHCI

Inherent in the problem of designing AHCI, and in fact many interfaces, is that it serves as the logical element that ties two physical buses together: the system interconnect, PCI/PCIe, and the storage subsystem interconnect, SATA.



Adapters built on such interfaces typically provide logical translation services between the two sides of the adapter. The electrical signaling and the physical transports and associated protocols on either side of the adapter are different: PCI/PCIe on the host side and SATA on the storage device side. The resulting logical transport and protocol implemented by the adapter, and used to bridge between the two physical interconnects, is then shaped by the underlying physical and link layer elements of the buses which it connects.

At the time AHCI was conceived and designed, the only devices connected via SATA were IO devices such as hard drives, optical drives, and other IO peripherals that were slow compared to the processor-memory complex of the platform. Given this performance disparity, an aggregation point in the system topology was needed, serving as a protocol/transport translator and as an elasticity buffer, relieving the processor from managing this disparity. An HBA using AHCI as the host-side interface and SATA as the device-side interface serves this purpose quite well. This aggregation/translation point smooths the bandwidth and latency disparity between the SATA storage interconnect and the system interconnect, in this case PCI or PCIe.


In providing such functionality, latency is introduced into device access. For the originally intended environment of AHCI this is not an issue, as the additional latency of the aggregation point, the HBA, is a minuscule component of the overall device access path latency.
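A back-of-the-envelope sketch makes this concrete. The figures below are assumed round numbers for illustration, not measurements: a few milliseconds for a hard drive access, tens of microseconds for a flash read, and some microseconds of HBA translation overhead.

```python
# Illustrative (assumed) latency figures in microseconds, not measurements:
HDD_ACCESS_US = 5_000   # ~5 ms seek + rotational delay for a hard drive
SSD_ACCESS_US = 50      # ~50 us for a NAND flash read
HBA_OVERHEAD_US = 10    # assumed HBA translation/aggregation overhead

def hba_share(device_us, hba_us=HBA_OVERHEAD_US):
    """Fraction of the total access path latency spent in the HBA."""
    return hba_us / (device_us + hba_us)

print(f"HDD behind HBA: {hba_share(HDD_ACCESS_US):.2%} of latency is HBA overhead")
print(f"SSD behind HBA: {hba_share(SSD_ACCESS_US):.2%} of latency is HBA overhead")
```

With these assumptions the HBA contributes well under one percent of a hard-drive access but a double-digit percentage of a flash access, which foreshadows why low-latency SSDs motivated a different interface.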


AHCI serves its intended architecture and design goals well. The host is able to keep attached SATA storage devices busy while minimizing the effort needed to manage the devices, and to take full advantage of additional features that they may offer.



NVMe

The deployment of SSDs, with performance capabilities orders of magnitude greater than previous storage devices, and especially the low latency characteristics of these devices, drove the transition from physical attachments based on traditional storage buses to physical interconnects more closely tied to the processor-memory complex, namely the PCIe bus.




With a storage device moving from a legacy storage interconnect to the low latency system interconnect, a new storage device interface was required: one that could span the storage domain, function equally well within the system interconnect domain, and unlock the full potential of these new devices. NVMe is that new interface.


The interface was also designed to be highly parallel and highly scalable. The scalability, parallelism, and inherent efficiency of NVMe allow the interface to scale up and down in performance without losing any of these benefits. These features allow the interface to be highly adaptable to a wide variety of system configurations and designs, from laptops to very high end, highly parallel servers.


Another important feature of the NVMe interface is its ability to support the partitioning of the physical storage extent into multiple logical storage extents, each of which can be accessed independently of the other logical extents. These logical storage extents are called Namespaces. Each NVMe Namespace may have its own pathway, or IO channel, over which the host may access the Namespace. In fact, multiple IO channels may be created to a single Namespace and used simultaneously. (Note that an IO channel, i.e. a submission/completion queue pair, is not limited to addressing one and only one Namespace; see the NVMe specification for details.)
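The relationship described above can be sketched as a toy model: a controller exports a set of namespace IDs, the host creates IO queue pairs, and each submitted command names its target namespace, so any queue pair can address any namespace. This is a simplified illustration of the spec's object model, not driver code; the class and method names are invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class QueuePair:
    """One IO channel: a submission queue paired with a completion queue."""
    qid: int
    submission: list = field(default_factory=list)
    completion: list = field(default_factory=list)

    def submit(self, nsid, lba, op):
        # Each command carries its target namespace ID explicitly,
        # so one queue pair can carry commands for any namespace.
        self.submission.append({"nsid": nsid, "lba": lba, "op": op})

@dataclass
class NvmeController:
    namespaces: set                       # NSIDs exported by the device
    queue_pairs: dict = field(default_factory=dict)

    def create_io_queue_pair(self, qid):
        self.queue_pairs[qid] = QueuePair(qid)
        return self.queue_pairs[qid]

# Two IO channels, both able to address either namespace:
ctrl = NvmeController(namespaces={1, 2})
qp_a = ctrl.create_io_queue_pair(1)
qp_b = ctrl.create_io_queue_pair(2)
qp_a.submit(nsid=1, lba=0, op="read")
qp_b.submit(nsid=1, lba=0, op="read")    # a second channel to namespace 1
qp_b.submit(nsid=2, lba=8, op="write")   # same channel, different namespace
```

The last two lines mirror the points in the text: multiple IO channels may target one namespace, and a single queue pair is not bound to a single namespace.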



The ability to partition a physical storage extent into multiple logical storage extents, and then to create multiple IO channels to each extent, is a feature of NVMe that was architected and designed to allow the system in which it is used to exploit the parallelism available in the upper layers of today's platforms and extend that parallelism all the way down into the storage device itself.


Multiple IO channels that can be dedicated to cores, processes, or threads eliminate the need for locks or other semaphore-based locking mechanisms around an IO channel. This ensures that IO channel resource contention, a major performance killer in IO subsystems, is not an issue.
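A minimal sketch of this per-core ownership idea, with assumed names and plain Python lists standing in for hardware queues: each thread submits only to the queue it owns, so no lock or semaphore protects the enqueue path.

```python
import threading

# Hypothetical sketch: one private submission queue per "core".
NUM_CORES = 4
per_core_queues = {core: [] for core in range(NUM_CORES)}

def worker(core, n_commands):
    q = per_core_queues[core]  # exclusively owned by this thread: no lock
    for i in range(n_commands):
        q.append(("read", core * 1000 + i))

threads = [threading.Thread(target=worker, args=(c, 100))
           for c in range(NUM_CORES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because no queue is ever shared between submitters, every append proceeds without contention; this is the software-side analogue of dedicating an NVMe submission queue to a core.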



Layering Aspects of Both

Both AHCI and NVMe are interfaces that rely on PCI/PCIe to provide the underlying physical interfaces and transports. An AHCI HBA will plug into a PCI/PCIe bus. A PCIe SSD implementing the NVMe interface will plug into a PCIe bus. Either interface could be enhanced to work with devices that plug into a system interconnect of a different type, but this has not been done to date, as PCI/PCIe is the dominant system interconnect and there is no need for additional physical interconnect support.
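One visible consequence of both interfaces living on PCI/PCIe is that an OS tells them apart by the class code triple (base class, subclass, programming interface) in PCI configuration space. The sketch below uses the PCI-SIG assignments for the two controllers; the lookup function itself is an invented illustration.

```python
# PCI-SIG class-code assignments (base class, subclass, prog-if):
KNOWN_CONTROLLERS = {
    (0x01, 0x06, 0x01): "AHCI (SATA controller, AHCI 1.0 interface)",
    (0x01, 0x08, 0x02): "NVM Express controller",
}

def classify(base_class, subclass, prog_if):
    """Map a PCI class-code triple to a controller type (sketch only)."""
    return KNOWN_CONTROLLERS.get((base_class, subclass, prog_if),
                                 "unknown device")

print(classify(0x01, 0x06, 0x01))
print(classify(0x01, 0x08, 0x02))
```

Both entries share base class 0x01 (mass storage): the two interfaces are siblings on the same system interconnect, differing only in subclass and programming interface.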


That said, AHCI is commonly implemented within the processor chipset. The processor-side physical bussing is implementation dependent and may or may not be PCI/PCIe. When the AHCI controller is implemented within the processor or processor support chipset and connected via the PCI/PCIe bus, it is referred to as a Root Complex Integrated Endpoint. The bus connecting the processor to the AHCI adapter within a processor or processor support chipset may be proprietary. Those cases are not germane to this discussion, and any further discussion of this topic is beyond the scope of this paper. The remainder of this paper will assume that the host-side physical interconnect is PCI/PCIe.

