Monday, July 23, 2007

Offsite Backup on Linux (1)

http://indeepnight.blogspot.com/2007/06/linuxoffsite-backup1.html

Indeepnight  2007/07/18

Linux reportedly does not take well to being imaged with Ghost, so I went looking for an alternative way to back it up.

I first considered cpio and software RAID, but the former is awkward for prompt recovery and the latter costs too much, so I settled on rsync instead.

rsync's strength is that it compares the data and transfers only the differences. For offsite backup it can also carry the data over an encrypted ssh channel (this pays off most inside a LAN); pushing backups across the Internet over a bandwidth-limited ADSL line, however, is not a good fit.

My system is Fedora Core 6; rsync is installed as part of the OS and is controlled by the xinetd super daemon. Google has plenty of material on both if you are curious.
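For reference, the xinetd control file for the rsync daemon looks roughly like the sketch below (this is only needed if you run rsync in daemon mode; pushing over ssh, as in this article, does not require it):

```
# /etc/xinetd.d/rsync -- set "disable = no" to let xinetd spawn the rsync daemon.
service rsync
{
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
}
```

After editing the file, reload xinetd (`service xinetd reload`) for the change to take effect.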

To keep the data backed up and ready to go live, I used two machines. Both are ordinary PCs, nothing special (Linux's hardware requirements are low, after all).

The primary system is a P4 2.4 GHz with 512 MB of DDR400 RAM; the offsite standby is a P3 850 MHz with 384 MB of PC133 SDRAM (the standby machine ran into a few hardware problems of its own, which I'll cover later).

The lesson: your old machines are not necessarily useless. A NAS-like appliance easily costs five or six figures; dedicated hardware performs better, but an old PC is a fine way to build the same skills.

rsync has a great many options; the interested reader can consult the official documentation (in English, I'm afraid).

Here are the options I use most often:

-a, --archive archive mode; same as -rlptgoD (preserves most file attributes, but not hard links)
-v, --verbose increase verbosity (print more detail during the transfer)
-R, --relative use relative path names (preserve the relative path on the destination)
--delete delete extraneous files from dest dirs (files removed from the source are also removed from the destination)

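To see `-a`, `-v`, and `--delete` in action without a second machine, here is a throwaway local run (all paths are made up for the demo):

```shell
# Build a small source tree, mirror it, then show --delete propagating a removal.
mkdir -p /tmp/rsync-demo/src/docs
echo "hello" > /tmp/rsync-demo/src/docs/a.txt

# -a keeps permissions and timestamps, -v prints each file, --delete mirrors removals.
rsync -av --delete /tmp/rsync-demo/src/ /tmp/rsync-demo/dst/

rm /tmp/rsync-demo/src/docs/a.txt
rsync -av --delete /tmp/rsync-demo/src/ /tmp/rsync-demo/dst/
# dst/docs/a.txt is now gone as well: the destination tracks the source exactly.
```

Note the trailing slash on `src/`: with it, rsync copies the directory's contents; without it, rsync copies the directory itself, a distinction that bites almost everyone once.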
rsync on Fedora Core 6 already defaults to transport over ssh; older versions needed an explicit -e ssh to get an encrypted transfer. But automating an offsite transfer over ssh still leaves one problem to solve.

Even with a scheduled job added to the crontab:

0 0 * * * rsync -avR --delete /home root@192.168.1.99:/home/backup

(at 00:00 each night, rsync the contents of /home to /home/backup on 192.168.1.99)

the ssh login runs into one sticky problem: someone has to type the password!

The next post explains how to solve the automatic-login problem, namely by logging in with a public key (pub-key authentication).
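As a quick preview, the pub-key setup boils down to two commands; the key path and remote host below are illustrative only:

```shell
# Generate a key pair with an empty passphrase so cron jobs are never prompted.
# (An empty passphrase trades some safety for automation; guard the private key.)
ssh-keygen -t rsa -N "" -f /tmp/demo-backup-key

# Install the public key on the backup host so rsync-over-ssh logs in silently:
#   ssh-copy-id -i /tmp/demo-backup-key.pub root@192.168.1.99
ls /tmp/demo-backup-key /tmp/demo-backup-key.pub
```

Once the public key is in the remote account's authorized_keys, the nightly rsync job runs without any interactive prompt.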


ITIL/ITSM: cutting costs starts with an investment of millions

Following Taiwan IBM and the Environmental Protection Administration (EPA), Acer eDC recently announced that it too has passed the ISO/IEC 20000 certification audit. In addition, Huang Ming-da, director of Tamkang University's information center, and Sheng Min-cheng, vice president of the ASE Group, both said at the 2007 Taiwan ITIL/ITSM trends conference and service-application showcase that they are interested in pursuing ISO/IEC 20000 certification as well.

These cases show that optimizing enterprise IT service management is attracting ever more attention from industry, government, and academia. Achieving it is another matter: from the moment a company decides to restructure its information systems along ITIL/ITSM lines, it must spend heavily to understand what the IT Infrastructure Library (ITIL) and IT Service Management (ITSM) actually are, work out what to do, select or develop the corresponding products, deploy them, and support the systems once they go live.

Acer, for example, began paying attention to ITIL/ITSM after a security incident in 2001. It not only set up a dedicated team to study the subject, but also paid for a seed team to take ITIL training and certification courses, then began selecting, deploying, and operating the corresponding system products; to prepare for the ISO/IEC 20000 audit it also hired consultants for advisory reviews. Chang San-cheng, vice president of Acer eDC, has said that Acer eDC's spending on restructuring its information systems around ITIL/ITSM ran to tens of millions of NT dollars.

Sheng Min-cheng of ASE has likewise said that ASE started following ITIL/ITSM in 2004. Although it spent only about NT$3 million to acquire CA's Unicenter, that was merely the purchase price; ASE also invested considerable manpower in up-front planning, deployment, and maintenance. ASE would like to pursue ISO/IEC 20000 certification, but with no spare funds or resources at the moment, it has no plans to do so in the short term.

Consider another case, the EPA. The agency passed its ISO/IEC 20000 audit in a mere eight months, but the cost was anything but low, again above NT$10 million; and Hsiao Hui-chuan, director of the EPA's environmental monitoring and information office, states plainly that passing ISO/IEC 20000 commits the agency to continuous operation and improvement, because reaching what ITIL/ITSM calls best practice requires a steady stream of funding and resources.

If it costs this much, why does optimized IT service management still attract so much market attention? The reason is simple: the service-management regime genuinely shapes how a company operates. Could employees really go a whole day without touching office software or any other IT system? And as a company grows, its IT architecture grows messier; ensuring that the systems reliably support every line of business is exactly why optimized IT service management commands the spotlight.

Want to cut costs? Pay the tuition first

Because ITIL/ITSM covers such broad ground, and every company's service-management regime varies with its business needs and systems architecture, merely studying ITIL/ITSM and planning a company-specific regime consumes real money and staff, possibly including paid consultants, followed by more spending on training. Up to this point the bill is still modest: a few million NT dollars will cover it.

Once a company commits to implementation, however, the cost climbs to at least NT$10 million, not counting subsequent support and maintenance, as the EPA, ASE, and Acer eDC cases show.



In other words, planning a company-specific IT service-management regime around ITIL/ITSM means spending several million to over ten million NT dollars on consulting, training, and system tooling, an astronomical sum for the average small or midsize business. If even the newly founded itSMF Taiwan chapter cannot offer SMBs a corresponding outreach program and aims only at mid-size and large firms, ITIL/ITSM risks becoming yet another vendor slogan, an "aristocrats' sport" that only larger enterprises can afford to practice.

And even a company with the money and resources will find the restructuring difficult, because the core of ITIL/ITSM is that the company must define an optimal IT service-management regime of its own, which raises several questions to settle first:

* "Optimal" may have no single right answer. Will senior management, IT staff, and employees in other departments agree on what counts as good service quality?
* "Optimal" implies never-ending change management. How many companies can stick with one strategy or investment over the long haul?
* "Optimal" implies investment. How many companies accept the idea that "an optimal service-management regime improves business performance, but you have to pay to build it first"?
* "Optimal" implies changing attitudes, perceptions, and behavior. Which employees welcome change? Least of all the IT staff whose job is delivering those services!

ITIL/ITSM does work, but the real question is whether the company is committed

A company should settle the four questions above before deciding how to proceed. Because restructuring IT around ITIL/ITSM amounts to re-engineering processes, committed senior executives who provide concrete support matter enormously.

Sheng Min-cheng puts it this way: ASE believes "IT is business and business is IT", so service quality has long been a priority there; senior management is quite willing to fund IT, but it expects to see results.

To make the payoff visible, ASE scoped out "quick win" projects from the start and committed substantial staff to them. Sheng, who led the effort, says that to maximize return on investment ASE audits progress systematically, for example by asking department heads for status updates at morning briefings and meetings.

Taiwan IBM and the EPA, both already certified to ISO/IEC 20000, agree that executive support is paramount: overhauling an IT service regime consumes so much money and staff that without senior managers participating and backing it, brokering the best compromise among stakeholders is nearly impossible.

Building an optimal IT service-management regime is genuinely hard. It takes money and resources, and if the pivotal executive walks away or is transferred, whether the improvement effort continues at all is anyone's guess.

[Personal income tax] Lending a house rent-free for business use still requires reporting rental income

Mr. Li of Zhongliao Township asks: I lend my house to my sister, rent-free, to run a small eatery. Must I report rental income?

The Nantou County branch of the National Taxation Bureau (Central Region) replies: under the Income Tax Act, when an individual lends property to anyone other than themselves, their spouse, or a lineal relative, rental income must be imputed by reference to prevailing local rents and income tax paid on it, unless the loan is verified to be genuinely gratuitous and the property is not used for business or professional practice. For a gratuitous loan, the parties must sign a loan-for-use contract, have two persons other than the parties certify that the loan is indeed rent-free, and have the contract notarized by a court.

Only when the house is genuinely lent without compensation to someone other than the above, for residential use, and the loan-for-use contract meets the requirements above, can the arrangement be excluded from rental income.

[Economic Daily News, 2007/07/23]

Computer navigation makes knee replacement more precise

Huang Tien-ju, Taipei / China Times, 2007.06.21

A 24-year-old woman with rheumatoid arthritis could barely walk within eighteen months of onset and relied on a wheelchair. After computer-navigated knee-replacement surgery at Kaohsiung Chang Gung Memorial Hospital, she needed no transfusion and was up on crutches the day after surgery.

Kuo Chi-yang, chief of orthopedics at Kaohsiung Chang Gung, says many elderly people and athletes need knee replacements but hesitate over the implant's service life. To improve the precision and longevity of knee replacement, large hospitals in Taiwan have introduced computer-navigation systems in recent years, with good results, and the cost is covered by National Health Insurance.

Conventional knee replacement is aligned by passing a rod roughly 20 cm long halfway down the patient's femur, achieving accuracy of about plus or minus three degrees.

The newest navigation systems instead drill two 4 mm holes in the patient's knee to mount tracking beacons; guided by the computer navigation, the surgeon can place the implant to within one degree, or even half a degree.

Beyond the higher accuracy, Kuo says, the navigated technique is minimally invasive compared with conventional surgery and is well worth recommending: it greatly reduces damage to the patient's marrow cavity, and the operation can be completed without transfusion.

Besides knee replacement, navigation systems are also used in spine surgery, hip replacement, fracture repair, ligament reconstruction, and other common orthopedic procedures.

The systems were reportedly first made in Germany; many large device makers now build them, at roughly NT$20 million apiece. They were first introduced in Taiwan five years ago by Hsu Wen-wei, deputy superintendent at Chiayi Chang Gung, and Yang Chun-yu, professor at NCKU's medical school.

Linux kernel 2.6.23: improvements and a stable userspace driver API

Improvements in Linux kernel 2.6.23

Beyond the Completely Fair Scheduler mentioned earlier (kerneltrap.org/node/8059), the upcoming Linux kernel 2.6.23 brings another change: Linus Torvalds has merged patches into the mainline tree that implement a stable userspace driver API in the kernel.
liquidat.wordpress.com/2007/07/21/linux-kernel-2623-to-have-stable-userspace-driver-api/

Greg Kroah-Hartman announced this stable driver API as long as a year ago; now the latest patches have landed in Linus's tree. The idea should make driver developers' lives a bit easier.


Linux kernel 2.6.23 to have stable userspace driver API

Linus Torvalds included patches into the mainline tree which implement a stable userspace driver API into the Linux kernel.

The stable driver API was already announced a year ago by Greg Kroah-Hartman. Now the last patches were uploaded and the API was included in Linus’ tree. The idea of the API is to make life easier for driver developers:

This interface allows the ability to write the majority of a driver in userspace with only a very small shell of a driver in the kernel itself. It uses a char device and sysfs to interact with a userspace process to process interrupts and control memory accesses.

Since future drivers using this API will run mainly in userspace there is no need to open up the source code for these parts. Also, drivers can be re-used even after kernel changes because the API will remain stable.

The background motivation for the inclusion of such a stable API comes from the embedded world: there, embedded drivers are often closed source, developed for a single kernel version, and not maintained over time. Using the new API, these drivers could be used for much longer.
In fact, such an API had already been developed by people from the embedded industry to make their lives easier. But it was developed for a single device only, not as a generic interface. Such one-off developments are now unnecessary.

However, DMA transfer between userspace and kernelspace is not yet implemented. This means essentially that drivers which involve high traffic are not an option yet. So graphic drivers as well as file system drivers and similar cannot use this API at the moment.

Some people might now ask why not simply allow closed source drivers directly in the kernel, since they are allowed in userspace anyway. But there is a huge difference between these two types of drivers: userspace drivers can be controlled to a certain degree and cannot trash the kernel, while kernelspace drivers could. A quite good explanation of the topic can be read at LWN.
One could also ask the opposite: does this mean the end of Linux (as, for example, the LWN article considers)? But again, kernelspace and userspace are very different, and with the new API the important part (the kernelspace part) still has to be open source! Besides, this API does not introduce anything that was not possible before: as already mentioned, such an API was developed quite some time ago and was actively used.

But I must also admit that while I see the need for such an API I still prefer hardware vendors who simply make the drivers Open Source and bring them upstream.

Thanks to German heise.de for a detailed article about the topic.

Posted in Linux.

21 Responses to “Linux kernel 2.6.23 to have stable userspace driver API”

  1. La bitacora de laparca » Blog Archive » Controladores en espacio de usuario: Por fin Says:

    […] Estaba esta mañana leyendo las noticias en OS News y me he encontrado con la grata noticia de que por fin se podrán incluir controladores en espacio de usuario en el núcleo Linux. […]

  2. diego Says:

    Nice!

    I hope for a Linux micro-kernel in the future, with all this cool features!

  3. gissi Says:

    I think this stable API is a huge chance for Linux. Of course it would be much better if the hardware vendors published free Linux drivers. However, in my opinion, it is better to increase the Linux market share by making it easier to provide proprietary drivers instead of inhibiting people to use Linux by not supporting their hardware. Maybe, in the future Linux will be strong enough to force hardware vendors to publish Open Source drivers, but this point is IMO not yet reached, so we should - as I already said - accept unfree drivers at the expense of not (yet) having a completely free system.

  4. nate Says:

    "It is better to increase the Linux market share by making it easier to provide proprietary drivers instead of inhibiting people to use Linux by not supporting their hardware."

    Besides the 'moral', GNU-style 'free software' arguments against proprietary software, there are serious technical and practical issues with proprietary drivers.

    Mainly they revolve around the fact that if something is wrong with them, nobody can fix them. Nobody knows how they work, nobody knows how the hardware works, so if the driver breaks for you, nobody from Ubuntu or Red Hat or Kernel.org will be able to help you. Unless you can get hold of the original (usually nameless) developer at Broadcom or ATI, you're up a creek without a paddle.

    Ever try to get hold of a developer from ATI to fix a driver issue that affects only you and your particular hardware/software combination? On Windows or Linux? Or Nvidia? Or Texas Instruments? Yeah, email their customer service and see how little they care. You're not worth their time; unless a problem affects a significant number of users, they aren't going to fix it.

    In comparison, with open source drivers I've gotten attention from the people who wrote them. I filed bugs; they fixed them. Every day, people working on Ubuntu or Debian submit patches and fixes to various drivers.

    This is why Red Hat refuses to support hardware configurations that require proprietary drivers, and why Dell refuses to ship server hardware that uses them. They understand this in the server room: Linux with open source drivers is significantly cheaper for both the end user and the OEM to support than Linux with closed source drivers.

    The Linux developers do need the hardware manufacturers' help to create full-featured drivers in a timely fashion. Without it, drivers must be reverse-engineered and will lag behind, like the open source 3D drivers for Nvidia and ATI. That sometimes creates the illusion that Linux developers can't keep up or produce quality drivers. But it is exactly that: an illusion.

    For the end user, the hardware manufacturer, and the people doing commercial support, Linux with OSS drivers has a significant advantage in stability, ease of use, and often performance. This advantage is one of the things that will make Linux much more appealing to end users than Windows.

    All of this is purely technical; it doesn't even touch the moral, ethical, or legal reasons for avoiding proprietary drivers.

    So saying that Linux needs proprietary drivers to succeed is a red herring. If end users have no choice, then yes, support via a proprietary driver beats no support at all. But that is hardly a good situation.

    The ultimate solution, the real solution, is that if you want to run Linux you should buy hardware that is well supported by OSS drivers. That is the setup that makes Linux appealing.

    Right now the only case where you NEED proprietary drivers on a Linux desktop is high-end 3D acceleration. That's it.

    For Wi-Fi, ACPI, motherboards, NICs, RAID, and almost any other class of hardware, good, high-quality devices with OSS support exist. But for high-end 3D graphics the only choices are ATI and Nvidia, and both refuse to support Linux with OSS drivers. (For everyday 3D, for Beryl/Compiz and simple games, onboard Intel works acceptably and has OSS drivers.)

    Having a userspace API isn't going to help that any.

    Really, this API is designed for embedded developers anyway; it has very little to do with proprietary drivers. There are situations in embedded development where userspace drivers are required for stability, control, and certain performance reasons. Whether the source is closed or open doesn't matter; the reasons for wanting userspace are technical.

    If you don't believe me, then understand this:

    In Linux, video drivers are ALREADY USERSPACE DRIVERS. Right now, video card drivers, for both 2D and 3D performance in X Windows, are userspace drivers. They always have been, and probably always will be.

    Surprised?

    Under the DRI/X/Linux driver model it works like this... I'll use the Intel drivers as an example.

    The xf86-video-intel driver is the 2D X driver. Its file is i915_drv.so and it is shipped with your X.org release. It is completely userspace, and newer versions support both the XAA and EXA acceleration models.

    The i915_dri.so file provides for 3D acceleration. They call it the ‘DRI driver’. DRI drivers are based on the open source Mesa OpenGL stack.. they take Mesa, accelerate what they can on the video card, and do the rest with software rendering. (OpenGL is a huge API, no video card accelerates all of it, only the portions that matter to performance in games and applications)

    Now, having closed source drivers for those poses no legal problem at all. X.org and Mesa licenses allow for their use in proprietary software.

    The only portion of the driver that needs to be in the kernel is called the DRM driver. This driver is what allows those userspace drivers to access the hardware and get their acceleration. It’s kept to a more-or-less minimum.

    So the Linux driver model for video cards do allow for userspace drivers.

    And guess what?

    ATI's driver still sucks and is still hugely complicated to install. Whether it works at all is a crap shoot... And Nvidia still refuses to fix bugs and problems, some of which cause crashes and security issues, because they are only noticed by a minority of users. And in both cases these companies still shovel large amounts of buggy, closed source code into your kernel, where it can cause anything from kernel panics to FS corruption (if you're very unlucky).

    So don’t think that userspace drivers are going to solve anything….

  5. Takis Says:

    > In Linux, video drivers are ALREADY USERSPACE DRIVERS.
    > Right now Video cards, for both 2D and 3D performance in X > Windows are userspace drivers. They’ve always have been,
    > and probably always will be.
    >
    > Surprised?
    >
    > Using the DRI/X/Linux driver model it works like this… I’ll use
    > Intel drivers for a example.

    The NVIDIA drivers are not userspace drivers, part of them runs in userspace, but a _huge_ part runs in the kernel. Certainly _not_ only a DRI driver.

    And I vaguely remember having to load a fglrx.ko module for some friends of mine using ATI cards, so I assume it’s the same for ATI.

  6. nate Says:

    Yes, exactly my point.

    The XGI company makes a proprietary driver for Linux. They have open source DRM driver for the kernel, open source 2-D driver, and a closed source DRI driver (due to code obtained from another company).

    Nvidia and ATI have always had the option of doing userspace video drivers, but they don’t. Do you know why? I don’t. There isn’t any sort of performance advantage to having kernel code for these sort of hardware.

    They do it because they feel like it, I guess. It’s certainly much more likely to cause a crash and corrupt data the way they are doing it now. This is why Microsoft has moved from in-kernel video drivers to userspace video drivers for Vista.

    The open source video drivers for X Windows have always been mostly userspace. In Linux, lots of USB drivers are userspace. Also, via fuse, you have file systems that are userspace.

    Having drivers in userspace, behind a fairly stable API, is nothing really remarkable. _This_ particular patch, a generic API for embedded developers, is rather new, but the userspace-driver concept for Linux isn't. And for the drivers that are in the kernel, there is a very good reason they are there.

    Once the hardware manufacturers and the open source driver developers get together and cooperate, that produces the highest quality driver possible; generally better than Windows drivers or any proprietary driver for Linux. This is what is most desirable and what everybody should aim for.

    For selecting hardware, unless you're really a Linux geek, the best thing you can do is simply buy a laptop or desktop from somewhere like System76 or Dell that already has all of its hardware supported by OSS drivers and pre-installs Linux. The only thing that remains closed source is the Nvidia driver, because for high-performance 3D Nvidia is the only game in town (for a normal 3D desktop, onboard Intel works fine and has OSS support). That, and the modem driver in Dell's laptops is closed source for whatever reason, but most people don't really care about that.

    This way it will 'Just Work', just as when you walk into an Apple store and buy an Apple computer: all you have to do is open the box and plug it in.

  7. Ta bu shi da yu Says:

    It's funny that Microsoft has moved the video driver from kernel space to userspace, given that putting it in the kernel was a deliberate design decision early on, in the days of Windows 2000.

  8. liquidat Says:

    nate, about the proprietary video drivers:
    For them, building on Mesa was not an option at all. Mesa was outdated, several years behind current development, for quite some time; relying on it would have been crazy.

    So they used their own OpenGL implementation - and this was closed.

  9. nate Says:

    "It's funny that Microsoft have moved the video driver from the kernel space to the userspace, given that it was a design decision to do this early on in the days of Windows 2000."

    For some drivers, yes. For video drivers, no.
    Video drivers as of XP were in-kernel, and in fact that predates Windows 2000. The desire for userspace drivers comes from earlier versions of NT: very early versions of NT were a real microkernel, but Microsoft dropped that when they realised that microkernels and high performance are two things that will never meet.

    For Vista WDDM is the new driver model.

    "nate, about the proprietary video drivers:
    For them building upon Mesa was not an option at all. Mesa was outdated and several years behind development for quite some time therefore relying on Mesa would have been crazy.

    So they used their own OpenGL implementation - and this was closed."

    That's completely irrelevant to anything I said.

    My point is that for video drivers in Linux, which (for modern high-end 3D) are the only major class of hardware without good OSS drivers, it has always been possible for Nvidia and ATI to put the vast majority of the driver in userspace. They do not; they put a lot of code in the kernel. Whose OpenGL stack they choose to use is pretty much irrelevant to this.

    Having this API for userspace drivers isn't going to make Linux easier for end users; it helps only a small number of embedded developers, for whom certain classes of hardware and tasks make userspace drivers technically desirable. Licensing isn't really the issue. If licensing, or a 'stable API', or anything like that were the issue, this patch wouldn't exist.

  10. nate Says:

    Sorry, I don't want to be a hard-ass or anything. I understand what you're saying, and I want to be friends. :-)

    I’ll try another way to say it…

    Userspace drivers aren't going to make Linux any easier for folks, because...

    * Closed source drivers are, generally, technically inferior to OSS drivers, all else being equal. For Linux to be as successful as it can be requires openness from the drivers and from the hardware manufacturers supporting Linux.

    * For graphics drivers, it has always been possible to do most of the work in userspace (closed vs. open doesn't matter here). XGI ships a closed source 3D graphics driver for Linux and it is userspace. Nvidia and ATI could do much more of their drivers in userspace, but they do not. This patch isn't going to affect them one way or the other.

    * This patch in particular is designed for specific circumstances that barely apply to the desktop anyway.

    * For the most part, OSS drivers, whether in or out of the kernel, exist for almost every class of hardware other than high-end 3D graphics. Whether it's Wi-Fi, SATA, sound, or anything else, you can always find good, modern hardware with decent OSS drivers.

    These are all separate points. I am sorry for not being clearer before; communicating in writing is not my strong point. I'll leave this blog alone now. :)

    And on a side note: for most people, the simple act of installing an OS, any OS, on random hardware is a significant enough barrier to prevent desktop Linux adoption. The real answer is Linux pre-installed and supported at the OEM level; then it will have a much better chance at popularity.

  11. nate Says:

    Ok, I lied. Almost.

    This post on LKML helps illustrate the point behind this particular patch:
    http://lkml.org/lkml/2007/7/19/557

    So far it's only for industrial I/O cards: very simple devices, often built in small numbers, that do little more than flip switches. Nothing like a typical consumer device, which is hugely complicated and needs lots of other things in the kernel to work properly.

  12. Links on this sunny sunday Says:

    […] of course - from what i understand (under Linux) device driver modules slot into the kernel. Well, no longer. The connection between both is being partially […]

  13. liquidat Says:

    nate: don't worry, I don't take such things personally. I'm just a bit surprised, because it sounds like you read my post as if I expected things to change massively now.

    But I don’t. I’m very well aware that this API is mainly for embedded. And that userspace drivers have been possible before.

    So thanks for the detailed comments. :)

  14. Ken Bloom Says:

    nate,

    Having a small well-defined interface for userspace drivers will make it very easy to snoop on the driver’s interaction with kernel space and easier to reverse-engineer in the long run.

  15. API estable en espacio de usuario para los drivers en el kernel 2.6.23 // menéame Says:

    […] que no podrá usarse para drivers como los de las tarjetas gráficas. Noticia original en inglés: liquidat.wordpress.com/2007/07/21/linux-kernel-2623-to-have-stable-useietiquetas: linux, api, drivers, kernel sin comentariosen: tecnología, software libre negativos: 0 […]

  16. This too was Dugg by … Says:

    […] read more | digg story 尼古拉 @ 2:17 pm [filed under Digg […]

  17. Chris Says:

    @nate:

    I can’t believe you wrote an entire essay to reply to a comment. You have too much time on your hands.

  18. Mauro Andres Says:

    Listen and learn …
    The pros of implementing drivers in user space, both technical and in terms of reliability, are covered in http://en.sevenload.com/videos/DqzIRi2/Andrew-Tanenbaum-Design-of-microkernel-OS

    As to whether they should be binary: of course not! ... for various reasons, including reliability (for the pragmatists among you) and security (for the paranoid). That said, this thinning-out of Linux will help make binary drivers easy to develop, and that may help consolidate, promote, and win corporate popularity for Linux.

  19. Top Posts « WordPress.com Says:

    […] Linux kernel 2.6.23 to have stable userspace driver API [image] Linus Torvalds included patches into the mainline tree which implement a stable userspace driver API into the […] […]

  20. /home/liquidat Linux kernel 2.6.23 to have stable userspace driver API « « Hell’s Kitchen Says:

    […] /home/liquidat Linux kernel 2.6.23 to have stable userspace driver API « 23 07 2007 /home/liquidat Linux kernel 2.6.23 to have stable userspace driver API « […]

  21. A prevalência do Linux « asf@web Says:

    […] na minha opinião mais um importante passo foi dado em direção a esse […]

NICTA develops L4/Iguana microkernel embedded-OS technology

NICTA Develops Secure Embedded Operating Systems Technology

http://www.linuxelectrons.com/article.php/20051124135314412

Thursday, November 24 2005 @ 01:53 PM CST
Contributed by: ByteEnable

Australia -- National ICT Australia (NICTA), Australia's national centre for ICT research, has developed an advanced open-source operating system that can improve the security, reliability, and trustworthiness of embedded systems.

"Our L4/Iguana OS has the potential to revolutionise the embedded systems in use around the world," said Professor Gernot Heiser, leader of NICTA's Embedded, Real-Time, and Operating Systems (ERTOS) program. "It is currently being deployed for evaluation by Australian SMEs and by multinational companies."

The first NICTA-developed technology to be deployed commercially is L4/Iguana, which tackles the growing problems caused by software complexity, network connectivity, and mobile code.

"L4/Iguana is a small OS developed specifically for safety and security. It minimises the amount of software that must be trusted to protect sensitive data or valuable IP, provides strong isolation between the different software components of an embedded system, and can protect against misbehaving or malicious untrusted components," Heiser said.

L4/Iguana is part of a general embedded-OS framework developed by the ERTOS research program at NICTA's Neville Roach Laboratory in Kensington, Sydney. "Our focus is on using microkernel technology to support the application of software-engineering techniques and formal methods to embedded software," he said.

The software builds on the L4 microkernel previously co-developed by NICTA with the University of Karlsruhe in Germany and the University of New South Wales in Sydney.

The ERTOS embedded-OS framework is unique in using hardware protection mechanisms to encapsulate complex software into protected components, shielding the system from their faults. The formal embedded-OS framework includes:

  • NICTA::Pistachio-embedded, the first kernel conforming to the L4-embedded API, based on L4Ka::Pistachio.
  • Iguana, an L4-based OS developed specifically for embedded systems.
  • Kenge, a set of packages for building microkernel-based systems.
  • Wombat, a de-privileged (that is, para-virtualised) Linux server that runs on L4/Iguana.

L4/Iguana supports many processor architectures that matter in the embedded world, including ARM, x86, and MIPS. On ARM it is the fastest OS that also provides memory protection, and the first able to offer a virtual machine that runs Linux.

The L4 microkernel can be used with QUALCOMM's Mobile Station Modem (MSM) chipsets.

* Wombat is reportedly even faster than native Linux in places...
http://ertos.nicta.com.au/research/l4/performance.pml


◆ NICTA open-sources L4

NICTA mobilises open source with L4
http://www.arnnet.com.au/index.php/id;710118083;fp;256;fpid;56736

Dahna McConnachie, LinuxWorld
30/05/2006 17:14:11

At its annual technology showcase in Sydney on Wednesday, the government-funded research organisation National ICT Australia (NICTA) demonstrated how its own open-source builds of Iguana (http://ertos.nicta.com.au/software/kenge/iguana-project/latest/) and the L4 microkernel
(http://ertos.nicta.com.au/research/l4/) work on embedded mobile platforms. Late last year, Qualcomm announced that it would use NICTA's version of the L4 microkernel together with the Iguana operating system on selected Mobile Station Modem chipsets.

Professor Gernot Heiser, leader of NICTA's Embedded, Real-Time, and Operating Systems (ERTOS) program, says his L4 microkernel solution is ready for commercialisation, and that he is already in talks with the other major mobile-chipset vendors.

Heiser says most mobile chipsets run enormous bodies of code and are correspondingly vulnerable to bugs.

"The functionality provided by modern mobile phones requires a huge amount of software to implement, and this implies that the software contains a large number of faults, literally tens of thousands of bugs," he said.

"Those faults not only threaten the reliability of the device, but have the potential to introduce security problems that threaten to compromise the privacy of data held on the device. It also makes devices susceptible to attacks by viruses and worms."

The L4/Iguana system uses hardware-provided memory protection to isolate system components from one another, letting manufacturers contain faults and thereby limit the damage they can do.

According to Heiser, L4 is one of the smallest existing kernels that can provide this kind of isolation.

Heiser's team is also working toward a mathematical, machine-checked proof of the functional correctness of the L4 microkernel, which would allow it to be used in systems where safety and security are paramount, such as cars, aircraft, and medical or military devices.

About ten full-time staff work with Heiser, and his team is part of a larger research program with more than forty staff and PhD students, over half of whom are well versed in operating-system kernel development.

Heiser says his team uses open source to maximise its impact and visibility.

"There are many embedded operating systems on the market; to get people interested you need more than just better technology. Besides, there is a trend away from royalty-bearing real-time OS licences. I see this as part of an overall trend toward commoditisation of core software infrastructure," he says.

For that reason, he says, open-sourcing the commodity layer is a way to reduce everyone's costs.

"Why waste resources redoing the same things someone else has already done, rather than making the changes that give you a commercial edge, while eliminating most of the cost by contributing something everybody shares?" he asks.

"If you talk to a marketing person, he will tell you all sorts of things, and by the time you discover that the product does not live up to the hype, you may have invested too much to pull out. With us, customers can (and do) examine the product themselves before they even talk to us."

Heiser's commoditisation strategy is built on services: first, consulting-style work porting customers' platforms to L4/Iguana, plus identifying and fixing performance problems. His team also provides whatever training is needed.

"In the short term, the main goal is to maximise awareness of our technology. Based on current experience, we have a real chance of becoming the de facto standard at certain levels of the industry," he says.

"For the medium term we are also working on models that include service subscriptions (of the kind Red Hat offers) or licensing revenue from a dual-licensing model; Trolltech and SleepyCat, recently acquired by Oracle, are successful examples. A mathematical proof of correctness will add enough value that some customers will consider a licence fee well worth paying."

The team implements the OS mostly in C, in places a restricted subset of C++, with a substantial amount of assembly where performance is critical.

"We use a lot of scripting as tooling, mostly Python. We also use the functional language Haskell for rapid prototyping and as a domain-specific language. It greatly simplifies the mathematical reasoning needed for formal verification, even though the final product is not written in Haskell," he says.

"Our use of Haskell gives one sub-project, evolving our kernel API to suit certain high-security applications, a significant advantage over competing projects."

Anatomy of the Linux kernel: history and architectural decomposition

M. Tim Jones (mtj@mtjones.com), Consultant Engineer, Emulex Corp.

06 Jun 2007

The Linux® kernel is the core of a large and complex operating system, and while it's huge, it is well organized in terms of subsystems and layers. In this article, you explore the general structure of the Linux kernel and get to know its major subsystems and core interfaces. Where possible, you get links to other IBM articles to help you dig deeper.
Given that the goal of this article is to introduce you to the Linux kernel and explore its architecture and major components, let's start with a short tour of Linux kernel history, then look at the Linux kernel architecture from 30,000 feet, and, finally, examine its major subsystems. The Linux kernel is over six million lines of code, so this introduction is not exhaustive. Use the pointers to more content to dig in further.

A short tour of Linux history

Linux or GNU/Linux?
You've probably noticed that Linux as an operating system is referred to in some cases as "Linux" and in others as "GNU/Linux." The reason behind this is that Linux is the kernel of an operating system. The wide range of applications that make the operating system useful are the GNU software. For example, the windowing system, compiler, variety of shells, development tools, editors, utilities, and other applications exist outside of the kernel, many of which are GNU software. For this reason, many consider "GNU/Linux" a more appropriate name for the operating system, while "Linux" is appropriate when referring to just the kernel.

While Linux is arguably the most popular open source operating system, its history is actually quite short considering the timeline of operating systems. In the early days of computing, programmers developed on the bare hardware in the hardware's language. The lack of an operating system meant that only one application (and one user) could use the large and expensive device at a time. Early operating systems were developed in the 1950s to provide a simpler development experience. Examples include the General Motors Operating System (GMOS) developed for the IBM 701 and the FORTRAN Monitor System (FMS) developed by North American Aviation for the IBM 709.

In the 1960s, the Massachusetts Institute of Technology (MIT) and a host of companies developed an experimental operating system called Multics (Multiplexed Information and Computing Service) for the GE-645. One of the participants, AT&T, dropped out of Multics and in 1970 developed its own operating system, called Unics. Alongside this operating system came the C language, in which the system was eventually rewritten to make operating-system development portable.

Twenty years later, Andrew Tanenbaum created a microkernel version of UNIX®, called MINIX (for minimal UNIX), that ran on small personal computers. This open source operating system inspired Linus Torvalds' initial development of Linux in the early 1990s (see Figure 1).



Figure 1. Short history of major Linux kernel releases

Linux quickly evolved from a single-person project to a world-wide development project involving thousands of developers. One of the most important decisions for Linux was its adoption of the GNU General Public License (GPL). Under the GPL, the Linux kernel was protected from commercial exploitation, and it also benefited from the user-space development of the GNU project (founded by Richard Stallman, whose source code dwarfs that of the Linux kernel). This gave Linux useful applications such as the GNU Compiler Collection (GCC) and a variety of shells.


Introduction to the Linux kernel

Now on to a high-altitude look at the GNU/Linux operating system architecture. You can think about an operating system from two levels, as shown in Figure 2.


Figure 2. The fundamental architecture of the GNU/Linux operating system
Methods for system call interface (SCI)
In reality, the architecture is not as clean as what is shown in Figure 2. For example, the mechanism by which system calls are handled (transitioning from user space to kernel space) can differ by architecture. Newer x86 central processing units (CPUs) provide fast system-call instructions (SYSENTER/SYSEXIT) that make this transition more efficient than the traditional int 80h software-interrupt method used by older x86 processors.

At the top is the user, or application, space. This is where the user applications are executed. Below the user space is the kernel space. Here, the Linux kernel exists.

There is also the GNU C Library (glibc). It provides the system call interface that connects to the kernel, along with the mechanism for transitioning between a user-space application and the kernel. This is important because the kernel and user application occupy different protected address spaces. And while each user-space process occupies its own virtual address space, the kernel occupies a single address space. For more information, see the links in the Resources section.

The Linux kernel can be further divided into three gross levels. At the top is the system call interface, which implements the basic functions such as read and write. Below the system call interface is the kernel code, which can be more accurately defined as the architecture-independent kernel code. This code is common to all of the processor architectures supported by Linux. Below this is the architecture-dependent code, which forms what is more commonly called a BSP (Board Support Package). This code serves as the processor and platform-specific code for the given architecture.





Properties of the Linux kernel

When discussing architecture of a large and complex system, you can view the system from many perspectives. One goal of an architectural decomposition is to provide a way to better understand the source, and that's what we'll do here.

The Linux kernel implements a number of important architectural attributes. At a high level, and at lower levels, the kernel is layered into a number of distinct subsystems. Linux can also be considered monolithic because it lumps all of the basic services into the kernel. This differs from a microkernel architecture where the kernel provides basic services such as communication, I/O, and memory and process management, and more specific services are plugged in to the microkernel layer. Each has its own advantages, but I'll steer clear of that debate.

Over time, the Linux kernel has become efficient in terms of both memory and CPU usage, as well as extremely stable. But the most interesting aspect of Linux, given its size and complexity, is its portability. Linux can be compiled to run on a huge number of processors and platforms with different architectural constraints and needs. One example is the ability for Linux to run on a processor with a memory management unit (MMU), as well as those that provide no MMU. The uClinux port of the Linux kernel provides for non-MMU support. See the Resources section for more details.





Major subsystems of the Linux kernel

Now let's look at some of the major components of the Linux kernel using the breakdown shown in Figure 3 as a guide.


Figure 3. One architectural perspective of the Linux kernel

System call interface

The SCI is a thin layer that provides the means to perform function calls from user space into the kernel. As discussed previously, this interface can be architecture dependent, even within the same processor family. The SCI is actually an interesting function-call multiplexing and demultiplexing service. You can find the SCI implementation in ./linux/kernel, as well as architecture-dependent portions in ./linux/arch. More details for this component are available in the Resources section.

Process management

What is a kernel?
As shown in Figure 3, a kernel is really nothing more than a resource manager. Whether the resource being managed is a process, memory, or hardware device, the kernel manages and arbitrates access to the resource between multiple competing users (both in the kernel and in user space).

Process management is focused on the execution of processes. In the kernel, these are called threads and represent an individual virtualization of the processor (thread code, data, stack, and CPU registers). In user space, the term process is typically used, though the Linux implementation does not separate the two concepts (processes and threads). The kernel provides an application program interface (API) through the SCI to create a new process (fork, exec, or Portable Operating System Interface [POSIX] functions), stop a process (kill, exit), and communicate and synchronize between them (signal, or POSIX mechanisms).

Also in process management is the need to share the CPU between the active threads. The kernel implements a novel scheduling algorithm that operates in constant time, regardless of the number of threads vying for the CPU. This is called the O(1) scheduler, denoting that the same amount of time is taken to schedule one thread as it is to schedule many. The O(1) scheduler also supports multiple processors (called Symmetric MultiProcessing, or SMP). You can find the process management sources in ./linux/kernel and architecture-dependent sources in ./linux/arch. You can learn more about this algorithm in the Resources section.

Memory management

Another important resource that's managed by the kernel is memory. For efficiency, given the way that the hardware manages virtual memory, memory is managed in what are called pages (4KB in size for most architectures). Linux includes the means to manage the available memory, as well as the hardware mechanisms for physical and virtual mappings.

But memory management is much more than managing 4KB buffers. Linux provides abstractions over 4KB buffers, such as the slab allocator. This memory management scheme uses 4KB buffers as its base, but then allocates structures from within, keeping track of which pages are full, partially used, and empty. This allows the scheme to dynamically grow and shrink based on the needs of the greater system.

Because memory has many competing users, the available memory can become exhausted. For this reason, pages can be moved out of memory and onto the disk. This process is called swapping because the pages are swapped from memory onto the hard disk. You can find the memory management sources in ./linux/mm.

Virtual file system

The virtual file system (VFS) is an interesting aspect of the Linux kernel because it provides a common interface abstraction for file systems. The VFS provides a switching layer between the SCI and the file systems supported by the kernel (see Figure 4).


Figure 4. The VFS provides a switching fabric between users and file systems

At the top of the VFS is a common API abstraction of functions such as open, close, read, and write. At the bottom of the VFS are the file system abstractions that define how the upper-layer functions are implemented. These are plug-ins for the given file system (of which over 50 exist). You can find the file system sources in ./linux/fs.

Below the file system layer is the buffer cache, which provides a common set of functions to the file system layer (independent of any particular file system). This caching layer optimizes access to the physical devices by keeping data around for a short time (or speculatively read ahead so that the data is available when needed). Below the buffer cache are the device drivers, which implement the interface for the particular physical device.

Network stack

The network stack, by design, follows a layered architecture modeled after the protocols themselves. Recall that the Internet Protocol (IP) is the core network layer protocol that sits below the transport protocol (most commonly the Transmission Control Protocol, or TCP). Above TCP is the sockets layer, which is invoked through the SCI.

The sockets layer is the standard API to the networking subsystem and provides a user interface to a variety of networking protocols. From raw frame access to IP protocol data units (PDUs) and up to TCP and the User Datagram Protocol (UDP), the sockets layer provides a standardized way to manage connections and move data between endpoints. You can find the networking sources in the kernel at ./linux/net.

Device drivers

The vast majority of the source code in the Linux kernel exists in device drivers that make a particular hardware device usable. The Linux source tree provides a drivers subdirectory that is further divided by the various devices that are supported, such as Bluetooth, I2C, serial, and so on. You can find the device driver sources in ./linux/drivers.

Architecture-dependent code

While much of Linux is independent of the architecture on which it runs, there are elements that must consider the architecture for normal operation and for efficiency. The ./linux/arch subdirectory defines the architecture-dependent portion of the kernel source contained in a number of subdirectories that are specific to the architecture (collectively forming the BSP). For a typical desktop, the i386 directory is used. Each architecture subdirectory contains a number of other subdirectories that focus on a particular aspect of the kernel, such as boot, kernel, memory management, and others. You can find the architecture-dependent code in ./linux/arch.





Interesting features of the Linux kernel

If the portability and efficiency of the Linux kernel weren't enough, it provides some other features that don't fit neatly into the previous decomposition.

Linux, being a production operating system and open source, is a great test bed for new protocols and advancements of those protocols. Linux supports a large number of networking protocols, including the typical TCP/IP, as well as extensions for high-speed networking (greater than 1 Gigabit Ethernet [GbE], and 10 GbE). Linux also supports protocols such as the Stream Control Transmission Protocol (SCTP), which provides many advanced features above TCP (as a replacement transport-level protocol).

Linux is also a dynamic kernel, supporting the addition and removal of software components on the fly. These are called dynamically loadable kernel modules, and they can be inserted at boot when they're needed (when a particular device is found requiring the module) or at any time by the user.

A recent advancement of Linux is its use as an operating system for other operating systems (called a hypervisor). Recently, a modification to the kernel was made called the Kernel-based Virtual Machine (KVM). This modification enabled a new interface to user space that allows other operating systems to run above the KVM-enabled kernel. In addition to running another instance of Linux, Microsoft® Windows® can also be virtualized. The only constraint is that the underlying processor must support the new virtualization instructions. See the Resources section for more information.


Resources

Learn

  • The GNU site describes the GNU GPL that covers the Linux kernel and most of the useful applications provided with it. Also described is a less restrictive form of the GPL called the Lesser GPL (LGPL).

  • UNIX, MINIX and Linux are covered in Wikipedia, along with a detailed family tree of the operating systems.

  • The GNU C Library, or glibc, is the implementation of the standard C library. It's used in the GNU/Linux operating system, as well as the GNU/Hurd microkernel operating system.

  • uClinux is a port of the Linux kernel that can execute on systems that lack an MMU. This allows the Linux kernel to run on very small embedded platforms, such as the Motorola DragonBall processor used in the PalmPilot Personal Digital Assistants (PDAs).

  • "Kernel command using Linux system calls" (developerWorks, March 2007) covers the SCI, which is an important layer in the Linux kernel, with user-space support from glibc that enables function calls between user space and the kernel.

  • "Inside the Linux scheduler" (developerWorks, June 2006) explores the new O(1) scheduler introduced in Linux 2.6 that is efficient, scales with a large number of processes (threads), and takes advantage of SMP systems.

  • "Access the Linux kernel using the /proc filesystem" (developerWorks, March 2006) looks at the /proc file system, which is a virtual file system that provides a novel way for user-space applications to communicate with the kernel. This article demonstrates /proc, as well as loadable kernel modules.

  • "Server clinic: Put virtual filesystems to work" (developerWorks, April 2003) delves into the VFS layer that allows Linux to support a variety of different file systems through a common interface. This same interface is also used for other types of devices, such as sockets.

  • "Inside the Linux boot process" (developerWorks, May 2006) examines the Linux boot process, which takes care of bringing up a Linux system and is the same basic process whether you're booting from a hard disk, floppy, USB memory stick, or over the network.

  • "Linux initial RAM disk (initrd) overview" (developerWorks, July 2006) inspects the initial RAM disk, which isolates the boot process from the physical medium from which it's booting.

  • "Better networking with SCTP" (developerWorks, February 2006) covers one of the most interesting networking protocols, Stream Control Transmission Protocol, which operates like TCP but adds a number of useful features such as messaging, multi-homing, and multi-streaming. Linux, like BSD, is a great operating system if you're interested in networking protocols.

  • "Anatomy of the Linux slab allocator" (developerWorks, May 2007) covers one of the most interesting aspects of memory management in Linux, the slab allocator. This mechanism originated in SunOS, but it's found a friendly home inside the Linux kernel.

  • "Virtual Linux" (developerWorks, December 2006) shows how Linux can take advantage of processors with virtualization capabilities.

  • "Linux and symmetric multiprocessing" (developerWorks, March 2007) discusses how Linux can also take advantage of processors that offer chip-level multiprocessing.

  • "Discover the Linux Kernel Virtual Machine" (developerWorks, April 2007) covers the recent introduction of virtualization into the kernel, which turns the Linux kernel into a hypervisor for other virtualized operating systems.

  • Check out Tim's book GNU/Linux Application Programming for more information on programming Linux in user space.


The Linux kernel begins supporting real-time operation

Linux kernel gains new real-time support

http://www.linuxdevices.com/news/NS9566944929.html

Oct. 12, 2006

TimeSys reports that additional real-time technology is being merged into the mainline Linux kernel starting with version 2.6.18. The company says that until now, real-time support in Linux required kernel patches, developed in part by Thomas Gleixner, a senior open source developer at TimeSys.

Gleixner is the principal author of the hrtimer (high-resolution timer) subsystem and a major contributor to Ingo Molnar's real-time preemption patch. In the 2.6.18 kernel changelog, 136 patches are Gleixner's work, and another 143 were contributed by Molnar, who works for Red Hat.

According to TimeSys, the real-time technology included in 2.6.18 will save individual kernel developers the effort of maintaining separate real-time kernel trees. Moreover, embedded Linux developers and ordinary desktop users who want to build kernels with millisecond-class real-time response times will no longer need any patches at all.

Gleixner said, "I am pleased that we can simplify development for real-time embedded devices by bringing this technology into the mainline kernel."

TimeSys notes that earlier Linux kernel versions can gain these real-time capabilities only through patches, which it makes available through LinuxLink (http://www.timesys.com/products/developer_suite.htm), its subscription-based online service.

TimeSys CEO Larry Weidman said, "The inclusion of real-time capabilities in the kernel validates the work of TimeSys in this space. Our customers that require real-time capabilities can be confident that they are on a path that has a clear future."

Weidman added, "By making real-time extensions available to all LinuxLink customers, we hope to make a supported real-time solution affordable to a wider audience."

TimeSys has long been committed to real-time Linux; before adopting its service-based business model, it positioned its Linux distribution around "single-kernel real-time." MontaVista, FSMLabs, LynuxWorks, Red Hat, and others are also active in this space.

How OS X executes applications

How OS X Executes Applications

http://0xfe.blogspot.com/2006/03/how-os-x-executes-applications.html

This article explains the executable format used by Mac OS X. Unlike other UN*X systems, Mac OS X does not use the ELF executable format; instead it uses Mach-O. Mach-O is also an ABI (Application Binary Interface): it describes how an executable is loaded and run by the kernel, including:

* Which dynamic loader to use.
* Which shared libraries to load.
* How to organize the process address space.
* Where the function entry-point is, and more.

Mach-O was originally designed for the NeXTstep OS running on the Motorola 68000 processor, and was later adopted by OpenStep running on x86.

A Mach-O file is divided into three regions: a header, the load commands, and the raw segment data. The first two describe the program's features, layout, and other characteristics of the file, while the raw segment data region contains sequences of bytes referenced by the load commands.

To inspect the parts of an executable on Mac OS X, you don't use ldd or objdump; OS X provides a more convenient tool, otool.

The rest of the article shows how to use otool to examine how an executable is laid out. A brief summary follows:


The Header

Running otool -h displays the header. The first field is the magic number, which identifies the file as a 32-bit or 64-bit Mach-O and also indicates the CPU's endianness (http://en.wikipedia.org/wiki/Endianness); the relevant definitions live in /usr/include/mach-o/loader.h. The cputype field lets the kernel confirm that the executable is running on the correct CPU; its definitions are in /usr/include/mach/machine.h. Universal Binaries that can run on both PPC and x86 carry an additional fat_header, which can be viewed with -f. cpusubtype provides extra CPU information. filetype identifies how the file is organized and used, distinguishing, for example, a library, an executable, or a core file; the filetype here equals MH_EXECUTE. The next two fields give the number and total size of the load commands, and flags indicates various other kernel capabilities.


Load Commands

The load commands are a series of commands that tell the kernel how to load the bytes in the various raw segment regions. Essentially, they describe how each segment is arranged, protected, and laid out in memory. Use otool -l to view this region.

LC_SEGMENT 0-3: describe how the segments are mapped into memory; a segment can contain zero or more sections (see below).

LC_LOAD_DYLINKER:
 specifies which dynamic linker to use; the default on OS X is /usr/bin/dyld

LC_SYMTAB, LC_DYNSYMTAB:
 specify the symbol tables used by the file and by the dynamic linker

LC_TWOLEVEL_HINTS:
 contains the hint table for the two-level namespace


Segments and Sections

A segment is a contiguous range of bytes that the kernel and the dynamic linker can map directly into virtual memory. The header and the load commands are treated as the first two segments of the file. A typical OS X executable contains the following five segments:

__PAGEZERO :
   mapped at virtual memory address 0 with no access
   permissions. This segment takes up no space in the
   file, and any access through NULL crashes immediately.

__TEXT : contains read-only data and executable code.

__DATA : contains writable data. These sections are
      usually marked copy-on-write by the kernel.

__OBJC : contains the runtime used by the Objective-C language.

__LINKEDIT : contains raw data used by the dynamic linker.

The __TEXT and __DATA segments can contain zero or more sections, and each section holds one particular kind of data, such as executable code, strings, or constants. Section contents can be viewed with otool -s; to disassemble the __text section, use otool -tv.


How an application is executed

1. The shell calls the fork() system call.

2. fork creates a logical copy of the calling process (the shell) and schedules it for execution. This child process then calls the execve() system call, supplying the path of the program to be executed.

3. The kernel loads the specified file, checks that its header is a valid Mach-O, and then begins interpreting the load commands, replacing the child process's address space with the program's segments.

4. At the same time, the kernel runs the dynamic linker to link in the required shared libraries; once the executable's symbols have been resolved, it calls the program's entry-point function.

5. The entry-point function is usually a standard function statically linked in from /usr/lib/crt1.o. It sets up the runtime environment and then calls main().


The Dynamic Linker

/usr/bin/dyld is responsible for loading the dependent shared libraries, importing the needed symbols and functions, and binding them to the current process. When the process starts, the linker's job is to load the shared libraries into the process's address space. Then, depending on how the program was built, the actual binding takes place at different stages of execution:

* Immediately after loading, as in load-time binding.
* When a symbol is referenced, as in just-in-time binding.
* Before the process is even executed, an optimization technique known as pre-binding.

If no binding behavior is specified explicitly, just-in-time binding is the default.

An application can run only once all of its symbols and segments have been resolved. /usr/bin/dyld searches a set of default paths for libraries and frameworks; the DYLD_LIBRARY_PATH and DYLD_FALLBACK_LIBRARY_PATH environment variables can be used to configure these paths.


Wrapping up

As you can see, loading an executable is a remarkably involved process. This summary includes as much information as possible, but many details have been left out. The author recommends the following documents and sites:

Mac OS X ABI Mach-O File Format Reference
Executing Mach-O Files
Overview of Dynamic Libraries
The otool man page
The dyld man page
/usr/include/mach/machine.h
/usr/include/mach-o/loader.h