Friday, August 17, 2007


Three Dimensional Images in the Air
- Visualization of "real 3D images" using laser plasma -
http://www.aist.go.jp/aist_e/latest_research/2006/20060210/20060210.html

Key Points

1. We create a flashpoint in the air using laser-generated plasma.
2. By optimizing the laser beam, we have greatly improved the brightness, contrast, and working distance of the plasma.
3. Using laser plasma, we have for the first time succeeded in displaying "real 3D images" in a space containing nothing but air.


Summary

In a collaboration among the National Institute of Advanced Industrial Science and Technology (AIST; President: Hiroyuki Yoshikawa), Keio University (President: Yuichiro Anzai), and Burton Inc. (CEO: Hidei Kimura), an experimental device has been completed that can display "real 3D images" consisting of dot arrays in space.

To date, most reported 3D displays draw pseudo-3D images on a 2D plane by exploiting human binocular disparity. This approach suffers from several problems, such as a limited field of view and physiological discomfort caused by the misidentification of virtual images.

The device we developed exploits the plasma-emission phenomenon near the focal point of a focused laser beam. By controlling the position of the focus along the x-, y-, and z-axes, we succeeded in displaying real 3D images composed of dot arrays in the air.


Research History

Keio University and Burton Inc. noticed that when a laser beam is strongly focused, air plasma emission can be induced near the focal point. Building on this, they succeeded in experimentally producing a device that displays 2D images in the air as dot arrays, by combining a laser source with galvanometric mirrors. To go further and display 3D images in the air, it is necessary to scan the focal point in the depth direction along the laser's optical axis. Achieving this, however, required improvements in both laser quality and the technique for shifting the focal position, so no 3D display device was produced at that time.



Research Details

In the collaboration among AIST, Keio University, and Burton Inc., a linear motor system and a high-quality, high-brightness infrared pulsed laser were added to the aforementioned 2D device, and the resulting device succeeded in displaying 3D images in space as dot arrays.

The linear motor system changes the position of the laser focus by rapidly scanning a lens group along the motor's track. In cooperation with this system, the image can be scanned in the z-axis direction; conventional galvanometric mirrors are used to scan the x and y directions.

The laser source used here is a high-quality, high-brightness infrared pulsed laser (pulse repetition rate of about 100 Hz), with which plasma generation can be controlled more precisely, allowing brighter, higher-contrast images to be drawn. In addition, the distance between the device and the drawn dots can be greatly extended (to several meters).

The emission time of a laser pulse is on the order of nanoseconds (10^-9 s). Our device uses one pulse per dot; the human eye perceives the plasma emission through the after-image (persistence of vision) effect, and a display rate of 100 dots per second can be achieved.

By synchronizing these pulses and controlling our device through software, we can draw any 3D object in the air.

Below are various figures displayed by our device.

※ The team has since made further improvements and can now draw the katakana character 「イ」 in the air.

* AIST press release (Japanese): successful high-performance experiments on "spatial 3D drawing (3D display)" technology
http://www.aist.go.jp/aist_j/press_release/pr2007/pr20070710/pr20070710.html

* AIST improves 3D projector ::: Pink Tentacle
http://www.pinktentacle.com/2007/07/aist-improves-3d-projector/

* AIST develops 3D image projector ::: Pink Tentacle
http://www.pinktentacle.com/2006/02/aist-develops-3d-image-projector/

Resources - BIOS and I/O

■ EFI (Extensible Firmware Interface)

Wednesday, August 15, 2007

The Segment Descriptor Cache


By Robert R. Collins


It is easy to underestimate the importance of something when you don’t know what it is or how it works. As early as the 80286, all Intel x86 processors have included an entity called the "segment-descriptor cache" which works behind the scenes, hidden from you. It is updated each time a segment register is loaded. It is used for all memory accesses by all Intel x86 processors since the 80286. If you’re an end user, you’ve probably used programs that depend upon the functions of the segment-descriptor cache. If you’re an engineer, there is a high probability that you’ve relied upon the functions of the segment-descriptor cache – and you might not have realized it. If you’re an engineer who writes any low-level code, programs hardware, or programs in protected mode, then you should be aware of the segment-descriptor cache and how it works.
From the 80286 to 80486, the meaning of "segment-descriptor cache" was unambiguous, referring to an internal microprocessor structure that stores the internal representation of the segment registers. This representation includes the segment base address, limit, and access rights. With the Pentium, Intel introduced a 94-entry, two-way set associative cache of segment-descriptor cache entries. Therefore, the phrase "segment-descriptor cache" is now ambiguous, with two possible meanings. Making matters worse, the new segment-descriptor cache was removed from the Pentium Pro design, but reintroduced in the Pentium II. (The lack of the new segment-descriptor cache in the Pentium Pro largely accounted for its poor 16-bit performance.) In this column, I’ll discuss the original segment-descriptor cache that has existed since the 80286 (and remains in all modern Intel x86 processors) and the role of the segment-descriptor cache in microprocessor memory management.

Loading Descriptor Cache Registers

Whether in real, protected, virtual-8086, or system-management mode, the microprocessor stores the base address of each segment in a hidden descriptor-cache register. Each time a segment register is loaded, the segment-base address, segment-size limit, and segment-access attributes (access rights) are loaded (cached) into these hidden registers. To enhance performance, subsequent memory references are made via the descriptor-cache registers. Without this optimization, each memory access would require the microprocessor to perform many time-consuming tasks. In real mode, the microprocessor would need to calculate the physical address from the segment-register value; the access rights would always indicate a read/write data segment (even for the code segment), and the limit would always be 64 KB. In protected mode, the segment base would need to be looked up in the appropriate descriptor table, where it is split across several fields; the segment access rights and segment limit are also contained in the descriptor table. The microprocessor would need to access those structures for each memory access. These descriptor-table values reside in memory, where accesses are slow compared to accesses within the microprocessor. Therefore, without an internal segment-descriptor cache to hold these values, each memory access would implicitly require many additional memory accesses.
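As an illustration of what the cache saves in the real-mode case, here is a minimal Python sketch (the segment value 0xB800 and the offset are arbitrary examples, not from the article): the base is computed once, when the segment register is loaded, and every later reference only adds and compares.

```python
def real_mode_base(segment):
    """In real mode, the cached segment base is simply 16 times the segment value."""
    return segment << 4

def memory_reference(cached_base, cached_limit, offset):
    """Every subsequent access reuses the cached base and limit:
    a cheap limit check, then base + offset."""
    if offset > cached_limit:
        raise Exception("segment limit violation")
    return cached_base + offset

# Loading DS with 0xB800 caches base 0xB8000 once; accesses then just add.
base = real_mode_base(0xB800)
addr = memory_reference(base, 0xFFFF, 0x0010)  # → 0xB8010
```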
Now consider the differences between real mode and protected mode. If the segment-descriptor cache didn't exist, determining the segment base, limit, and access rights would require more than one CPU cycle for every memory access. The segment-descriptor cache exists to eliminate this deficiency: all of these mode differences are resolved once, at the time each segment register is loaded, so the performance penalty is incurred only once. Thereafter, all memory management is performed according to the values in the segment-descriptor cache of each respective segment register.
At power-up, the descriptor-cache registers are loaded with fixed, default values – the CPU is in real mode, and all segments are marked as read/write data segments, including the code segment (CS). According to Intel, each time any segment register is loaded in real mode, the base address is calculated as 16 times the segment value, while the access rights and size limit attributes are given fixed, "real-mode compatible" values. This is not true. In fact, only the CS descriptor caches for the 286, 386, and 486 get loaded with fixed values each time the segment register is loaded. Loading CS, or any other segment register in real mode, on later Intel processors doesn’t change the access rights or the segment size limit attributes stored in the descriptor cache registers. For these segments, the access rights and segment size limit attributes from any previous setting are honored. Thus, it is possible to have a four-GB read-only data segment in real mode on the 80386 – but Intel won’t acknowledge this mode of operation, though it is implicitly supported. Furthermore, Intel can’t remove it without rendering many software programs ineffective.
Protected mode differs from real mode in this respect: Each time a segment register is loaded, the descriptor-cache register gets fully loaded; no previous values are honored. The descriptor cache is loaded directly from the descriptor table. The CPU checks the validity of the segment by testing the access rights in the descriptor table. Complete checks are made, and illegal values generate exceptions. Any attempt to load CS with a read/write data segment will generate a protection error; likewise, any attempt to load a data segment register with an executable segment will also generate an exception. The CPU strictly enforces these protection rules. Only if the descriptor-table entry passes all the tests does the descriptor-cache register get loaded.

Format of Descriptor Cache Registers

The layout of the segment-descriptor cache registers changes with almost every processor generation, though their function does not. These differences are known as "implementation specific" because the exact layout and contents depend on the design and implementation of the microprocessor. For the most part, the fields of the segment-descriptor cache mirror the fields in the protected-mode descriptor table. In a 32-bit descriptor-table entry, however, the segment-base address, segment-access rights, and segment-limit fields are not contiguous; these related fields are combined before being placed in the segment-descriptor cache. Figure 1 shows the relationship between the fields in the descriptor table and the segment-descriptor cache.
Figure 1: Combining fields from the descriptor table into the segment-descriptor cache.

Offset   Description
63..56   Base[31:24]
55       G
54       D/B
53       0
52       AVL
51..48   Limit[19:16]
47       P
46..45   DPL
44       S
43..40   Type
39..16   Base[23:00]
15..00   Limit[15:00]

The two Base fields combine into the cached Segment Base, the two Limit fields into the cached Segment Limit, and the remaining bits form the cached Segment Access Rights.
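The field combination shown in Figure 1 can be sketched in Python; `combine_descriptor` is a hypothetical helper (not from the article), and the sample value is the conventional flat 4-GB code-segment descriptor.

```python
def combine_descriptor(raw):
    """Combine the split fields of a 64-bit descriptor-table entry
    into the base, access rights, and limit the descriptor cache holds."""
    base   = ((raw >> 16) & 0xFFFFFF) | (((raw >> 56) & 0xFF) << 24)  # Base[23:00] + Base[31:24]
    limit  = (raw & 0xFFFF) | (((raw >> 48) & 0xF) << 16)             # Limit[15:00] + Limit[19:16]
    access = (raw >> 40) & 0xFF                                       # P, DPL, S, Type
    if (raw >> 55) & 1:                 # G bit: limit counted in 4-KB granules
        limit = (limit << 12) | 0xFFF
    return base, access, limit

# The conventional flat 4-GB code-segment descriptor:
base, access, limit = combine_descriptor(0x00CF9A000000FFFF)  # → (0, 0x9A, 0xFFFFFFFF)
```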

Segment-Descriptor Cache

It is useful to know the layout of the fields inside the segment-descriptor cache. The segment base and segment limit are always combined from the descriptor table to form a complete base address and segment limit inside the segment-descriptor cache. The format of the access rights within the segment-descriptor cache changes from implementation to implementation. Likewise, the order of the fields within the cache can change. Regardless, knowing the format of the segment-descriptor cache can make you more productive, reducing both development and debugging time. Tables 1 through 4 show the descriptor cache entry format for all Intel x86 processors from the 80286 through Pentium Pro.
Table 1 - 80286 Descriptor Cache Entry
Bit Position  Description
47..32        16-bit Limit
31            P
30..29        DPL
28            S
27..24        Type
23..00        24-bit Base

Table 2 - 80386 and 80486 Descriptor Cache Entry
Bit Position  Description
95..64        32-bit Limit
63..32        32-bit Base
31..24        0
23            P
22..21        DPL
20            S
19..16        Type
15            0
14            D/B
13..0         0

Table 3 - Pentium Descriptor Cache Entry
Bit Position  Description
95..79        0
78            D/B
77..72        0
71            P
70..69        DPL
68            S
67..64        Type
63..32        32-bit Base
31..00        32-bit Limit

Table 4 - Pentium Pro Descriptor Cache Entry
Bit Position  Description
95..64        32-bit Base
63..32        32-bit Limit
31            0
30            D/B
29..24        0
23            P
22..21        DPL
20            S
19..16        Type
15..00        Segment Selector

Descriptor-Cache Registers In Real Life

There are different ways to take advantage of the segment-descriptor cache registers. System-management mode (SMM) gives you direct control over each field in the segment-descriptor cache. (See my DDJ January/March/May 1997 columns for an in-depth look at System Management Mode.) In-circuit emulators (ICEs) also allow direct control over each field in the segment descriptor cache. (Refer to my DDJ July/September/November 1997 columns for information on in-circuit emulation.)
For instance, when writing any low-level assembly-language programs (such as OS kernels, device drivers, BIOS, or protected-mode programming), I make common, simple mistakes. I sometimes make a mistake when creating my segment descriptor table, usually the Global Descriptor Table (GDT). I may have created the GDT using an incorrect base address, segment limit, or access rights. Ultimately, my program fails, and I must use the ICE as a debugging tool. I’ll then insert the undocumented ICEBP instruction into my code to instruct the ICE to breakpoint at the suspected point of failure (see http://www.x86.org/secrets/opcodes/ICEBP.html). Within moments, I discover that I used incorrect values in building the descriptor table. Using the ICE, I can load each field of the segment-descriptor cache. If I used an incorrect segment base address, I can correct it and continue. Likewise, I can make the same corrections for the segment limit and segment access rights. I know that these values are "sticky," meaning that they don’t get changed until a new segment register value is loaded. Therefore, I can make these changes, and continue debugging my program. Using this technique, I can usually discover six or more bugs in my program before recompiling. Because I don’t need to recompile my program after discovering each and every mistake, I save valuable development and debugging time.
Programming in SMM implicitly takes advantage of the segment-descriptor cache registers. The segment-descriptor cache registers are saved and restored along with the remaining microprocessor state in the SMM state save map. These values are saved and restored upon entry and exit to system-management mode. In my March 1997 DDJ column, I disclosed all of the undocumented fields (known as "reserved" fields in Intel parlance) in the Pentium SMM state save map. As I discussed, the segment-descriptor cache registers are stored in these reserved fields.
It is possible to manipulate these segment-descriptor cache values from within the SMM handler. The segment base may be changed to a value that is inconsistent with its associated segment register value. The segment access rights may be manipulated to give current-privilege-level-3 (CPL-3) tasks CPL-0 access (an obvious breach of security). The segment limit may be changed to create a segment with a four-GB limit while in real mode. Using SMM, it is possible to change the segment attributes to values that are programmatically impossible; for example, a real-mode segment at two MB, a segment limit size of 4-gigabytes minus 16, or a read/write code segment in protected mode (not to mention CPL-0 access within a CPL-3 task).

Descriptor Cache Anomalies and Creating "Unreal Mode"

Using either of these methods to manipulate segment-descriptor caches can be challenging. However, there’s another programmatic way of putting segment-descriptor caches to work – creating a CPU operating mode known as "unreal" mode.
Unreal mode is created when a real-mode segment has a four-GB segment limit. It can be created without any hardware debugger or SMM programming, using a simple assembly-language program. The program begins in real mode, transitions into protected mode, and loads all of the segment registers with descriptors containing four-GB segment limits. After setting the segment limits, it returns immediately to real mode without reloading the segment registers with real-mode-compatible (64-KB) segments. Back in real mode, the segment limits retain their four-GB values. Thereafter, DOS programs can take advantage of the entire 32-bit address space without resorting to protected-mode programming.
Unreal mode has been used commonly since it was discovered on the 80386. Unreal mode is so commonly used, in fact, that Intel has been forced to support this mode as part of legacy 80x86 behavior, though it’s never been documented. Memory managers and games often take advantage of unreal mode. Source code that demonstrates how you can create unreal mode is available electronically from DDJ (see "Resource Center,") or at ftp://ftp.x86.org/dloads/UNREAL.ZIP.
The real-mode code segment (CS) descriptor-cache behavior has changed between generations of Intel processors. The role of the CS descriptor cache in real mode differs between the 80286/80386/80486 and all later Intel microprocessors: the earlier microprocessors honor the real-mode segment access rights until a far control transfer occurs, whereas later processors ignore any access rights in the CS descriptor cache irrespective of far control transfers. On the earlier processors, any far control transfer set the CS descriptor-cache access rights to the real-mode-compatible value of a read/write data segment (value = 0x93). Later processors leave the original value intact but ignore its contents. Therefore, transitions from real to protected mode on the later processors immediately revert the behavior to the stagnant CS descriptor-cache access-rights value. On earlier processors, the CS limit is also restored to its real-mode-compatible value (64 KB); later processors leave the CS segment limit alone, making its behavior consistent with the other data segment registers.
From the 80286 to the Pentium, all Intel processors derive their current privilege level (CPL) from the SS access rights. The CPL is loaded from the SS descriptor table entry when the SS register is loaded. The undocumented LOADALL instruction (or system-management mode RSM instruction) can be used to manipulate the SS descriptor-cache access rights, thereby directly manipulating the CPL of the microprocessors. (See http://www.x86.org/articles/loadall/ for a description of LOADALL.) The Pentium Pro behaves differently: Once the CPL is loaded into the Pentium Pro, it is not internally derived from the SS access rights. The Pentium Pro retains a separate CPL register. Through the system-management mode RSM instruction, you can directly manipulate the CPL of the Pentium Pro, though not by manipulating the SS access rights value. (I will discuss the Pentium Pro SMM state save map and all of the secrets contained therein in a future column.)
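Deriving the CPL from the SS access rights amounts to reading the DPL bits out of the cached access-rights byte (P in bit 7, DPL in bits 6..5, S in bit 4, Type in bits 3..0). A one-line sketch, using the real-mode-compatible value 0x93 mentioned earlier:

```python
def cpl_from_ss_access_rights(ar):
    """Pre-Pentium-Pro behavior: CPL = DPL field (bits 6..5)
    of the cached SS access-rights byte."""
    return (ar >> 5) & 0x3

cpl = cpl_from_ss_access_rights(0x93)  # → 0 (P=1, DPL=0, read/write data segment)
```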

Conclusion

I use the segment-descriptor cache registers every day – when I’m debugging on my ICE to help correct common protected-mode programming errors, programming in system-management mode to create events, or creating real-mode segments that can address the entire four-GB address space. The use of the segment-descriptor cache is highly implementation specific, meaning that its behavior and layout are dependent upon the implementation of the specific microprocessor. Intel doesn’t guarantee that the behavior of the descriptor cache will remain the same from microprocessor to microprocessor. Therefore, it would be foolhardy to write any production-quality source code which depends upon this behavior (except unreal mode).

Sunday, August 12, 2007

For comprehensive IT service management, ITIL, CMMI, COBIT and other standards should all be considered

‧IBM: ITIL is not the only bible of IT service management

Reporter 馬培治 / Taipei  10/08/2007

Amid the current wave of interest in ITIL, IBM argues that comprehensive IT service management should take into account not only ITIL but also standards such as CMMI and COBIT.

With the recent founding of the Taiwan chapter of itSMF (the IT Service Management Forum) and the release of ITIL (IT Infrastructure Library) version 3, ITIL has suddenly become a hot topic among vendors and enterprises. Since last month (July), CA, BMC, and HP have each held ITIL events, drawing more than a thousand corporate attendees in total. IBM, which has so far stayed on the sidelines, believes the ITIL buzz helps promote the market, but stresses that "to do IT service management well, standards such as CMMI and COBIT should also be considered," according to IBM executives.

"Enterprises should assess which areas of their IT service management need improvement and choose the appropriate standard to follow," said Michael Shallcross, IBM executive consultant for architecture services and IT strategy, arguing that IT service management should choose its standards from the users' perspective. If the focus of enterprise IT services is operational, he explained, ITIL fits well; if the main users are in development, CMMI (Capability Maturity Model Integration) should be adopted; and for governance or planning, COBIT (Control Objectives for Information and related Technology).


陳俊昌, consulting manager in IBM's global IT services division, said IBM has been developing its own proprietary ITSM framework and methodology since 1983, and has also taken part in drafting ITIL-related standards and publications. "ITIL is only one part of ITSM; an enterprise can draw on the best of many standards at once, according to its own situation and needs," he said.

ITSM (IT Service Management) is a concept put forward in the 1980s by the UK Office of Government Commerce (OGC): treating IT as a service and using management practices to improve and assure its quality. On this basis the OGC developed the ITIL framework, offering processes and methodology to help enterprises achieve good ITSM. Because ITIL was created specifically for ITSM, enterprises tend to think of ITIL the moment they hear ITSM.

However, because IT services are broadly defined, software development and IT governance also affect IT service quality and are therefore considered to fall within the scope of ITSM. This is why IBM argues that CMMI and COBIT should be included in ITSM alongside ITIL.

曹永暉, enterprise-applications research manager at the research firm IDC, responded that ITIL is one way to achieve good ITSM, but not the whole of it.

"Domestic enterprises prefer to adopt existing standards, so at the mention of ITSM they think first of ITIL," 曹永暉 said. Although ITIL offers best-practice guidelines for improving ITSM, no two enterprises are exactly alike; rather than follow a single standard, he suggested, enterprises could take the ITIL framework as a foundation and develop their own IT service management processes.

Other vendors agree that ITIL is not absolute, but that it is a good reference for ITSM. 江禎義, senior technical consultant at CA, said ITIL is certainly not the only path to good ITSM, but for enterprises without the resources to develop their own management processes, "having a ready-made standard to consult makes things easier."

As for IBM's view that CMMI and COBIT should also be included in enterprise ITSM planning, 曹永暉 said other vendors are not unaware that ITSM covers more than ITIL; it is a matter of marketing. Promoting ITIL, COBIT, and CMMI together, he argued, favors IBM's integrated product sales: "not every vendor promoting ITIL has a product line as complete as IBM's," he said.

Linux 2.6 System Calls: 12 Categories

‧Linux 2.6 System Calls: 12 Categories
Posted by jollen on October 11, 2006, 2:58 PM / Source: www.jollen.org, reproduced with the original author's permission

This is a classification of the system call services provided by Linux, together with the file implementing each call. The table is used in Jollen's course "2. GNU Toolchains and Embedded Linux Programming" and is offered here for reference. It is based on the Linux 2.6.11 source; please study it alongside version 2.6.11 or later.

no Syscall name Implementation file in Linux 2.6.11 (or above)

The Linux 2.6.11 system calls are grouped into the following categories:

The last column lists the file implementing each system call (based on Linux 2.6.11). Note that this blog does not hyperlink the files; you can copy this post and keep a Linux 2.6.11 (or later) source tree at hand to look up each file.
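The table below is plain text, one syscall per line in the form "number name file", so it is easy to process mechanically. A small sketch (the two sample rows are copied from the table) shows how the entries can be grouped by implementing file:

```python
from collections import defaultdict

def parse_syscall_table(lines):
    """Group 'number sys_name path' rows by implementing file.
    Category headers and other non-matching lines are skipped."""
    by_file = defaultdict(list)
    for line in lines:
        parts = line.split()
        if len(parts) == 3 and parts[0].isdigit():
            num, name, path = parts
            by_file[path].append((int(num), name))
    return by_file

rows = [
    "5 sys_open linux/fs/open.c",
    "6 sys_close linux/fs/open.c",
    "Filesystem",                    # header line, ignored
]
table = parse_syscall_table(rows)    # table["linux/fs/open.c"] holds both entries
```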

Machine-dependent (i386)
101 sys_ioperm linux/arch/i386/kernel/ioport.c
110 sys_iopl linux/arch/i386/kernel/ioport.c
123 sys_modify_ldt linux/arch/i386/kernel/ldt.c
2 sys_fork linux/arch/i386/kernel/process.c
11 sys_execve linux/arch/i386/kernel/process.c
120 sys_clone linux/arch/i386/kernel/process.c
190 sys_vfork linux/arch/i386/kernel/process.c
243 sys_set_thread_area linux/arch/i386/kernel/process.c
244 sys_get_thread_area linux/arch/i386/kernel/process.c
26 sys_ptrace linux/arch/i386/kernel/ptrace.c
67 sys_sigaction linux/arch/i386/kernel/signal.c
72 sys_sigsuspend linux/arch/i386/kernel/signal.c
119 sys_sigreturn linux/arch/i386/kernel/signal.c
173 sys_rt_sigreturn linux/arch/i386/kernel/signal.c
179 sys_rt_sigsuspend linux/arch/i386/kernel/signal.c
186 sys_sigaltstack linux/arch/i386/kernel/signal.c
42 sys_pipe linux/arch/i386/kernel/sys_i386.c
59 sys_olduname linux/arch/i386/kernel/sys_i386.c
82 old_select linux/arch/i386/kernel/sys_i386.c
90 old_mmap linux/arch/i386/kernel/sys_i386.c
109 sys_uname linux/arch/i386/kernel/sys_i386.c
117 sys_ipc linux/arch/i386/kernel/sys_i386.c
192 sys_mmap2 linux/arch/i386/kernel/sys_i386.c
113 sys_vm86old linux/arch/i386/kernel/vm86.c
166 sys_vm86 linux/arch/i386/kernel/vm86.c
Filesystem
245 sys_io_setup linux/fs/aio.c
246 sys_io_destroy linux/fs/aio.c
247 sys_io_getevents linux/fs/aio.c
248 sys_io_submit linux/fs/aio.c
249 sys_io_cancel linux/fs/aio.c
36 sys_sync linux/fs/buffer.c
118 sys_fsync linux/fs/buffer.c
134 sys_bdflush linux/fs/buffer.c
148 sys_fdatasync linux/fs/buffer.c
183 sys_getcwd linux/fs/dcache.c
253 sys_lookup_dcookie linux/fs/dcookies.c
254 sys_epoll_create linux/fs/eventpoll.c
255 sys_epoll_ctl linux/fs/eventpoll.c
256 sys_epoll_wait linux/fs/eventpoll.c
86 sys_uselib linux/fs/exec.c
41 sys_dup linux/fs/fcntl.c
55 sys_fcntl linux/fs/fcntl.c
63 sys_dup2 linux/fs/fcntl.c
221 sys_fcntl64 linux/fs/fcntl.c
135 sys_sysfs linux/fs/filesystems.c
54 sys_ioctl linux/fs/ioctl.c
143 sys_flock linux/fs/locks.c
9 sys_link linux/fs/namei.c
10 sys_unlink linux/fs/namei.c
14 sys_mknod linux/fs/namei.c
38 sys_rename linux/fs/namei.c
39 sys_mkdir linux/fs/namei.c
40 sys_rmdir linux/fs/namei.c
83 sys_symlink linux/fs/namei.c
21 sys_mount linux/fs/namespace.c
22 sys_oldumount linux/fs/namespace.c
52 sys_umount linux/fs/namespace.c
217 sys_pivot_root linux/fs/namespace.c
169 sys_nfsservctl linux/fs/nfsctl.c
5 sys_open linux/fs/open.c
6 sys_close linux/fs/open.c
8 sys_creat linux/fs/open.c
12 sys_chdir linux/fs/open.c
15 sys_chmod linux/fs/open.c
30 sys_utime linux/fs/open.c
33 sys_access linux/fs/open.c
61 sys_chroot linux/fs/open.c
92 sys_truncate linux/fs/open.c
93 sys_ftruncate linux/fs/open.c
94 sys_fchmod linux/fs/open.c
99 sys_statfs linux/fs/open.c
100 sys_fstatfs linux/fs/open.c
111 sys_vhangup linux/fs/open.c
133 sys_fchdir linux/fs/open.c
193 sys_truncate64 linux/fs/open.c
194 sys_ftruncate64 linux/fs/open.c
198 sys_lchown linux/fs/open.c
207 sys_fchown linux/fs/open.c
212 sys_chown linux/fs/open.c
268 sys_statfs64 linux/fs/open.c
269 sys_fstatfs64 linux/fs/open.c
131 sys_quotactl linux/fs/quota.c
89 old_readdir linux/fs/readdir.c
141 sys_getdents linux/fs/readdir.c
220 sys_getdents64 linux/fs/readdir.c
3 sys_read linux/fs/read_write.c
4 sys_write linux/fs/read_write.c
19 sys_lseek linux/fs/read_write.c
140 sys_llseek linux/fs/read_write.c
145 sys_readv linux/fs/read_write.c
146 sys_writev linux/fs/read_write.c
180 sys_pread64 linux/fs/read_write.c
181 sys_pwrite64 linux/fs/read_write.c
187 sys_sendfile linux/fs/read_write.c
239 sys_sendfile64 linux/fs/read_write.c
142 sys_select linux/fs/select.c
168 sys_poll linux/fs/select.c
18 sys_stat linux/fs/stat.c
28 sys_fstat linux/fs/stat.c
84 sys_lstat linux/fs/stat.c
85 sys_readlink linux/fs/stat.c
106 sys_newstat linux/fs/stat.c
107 sys_newlstat linux/fs/stat.c
108 sys_newfstat linux/fs/stat.c
195 sys_stat64 linux/fs/stat.c
196 sys_lstat64 linux/fs/stat.c
197 sys_fstat64 linux/fs/stat.c
62 sys_ustat linux/fs/super.c
226 sys_setxattr linux/fs/xattr.c
227 sys_lsetxattr linux/fs/xattr.c
228 sys_fsetxattr linux/fs/xattr.c
229 sys_getxattr linux/fs/xattr.c
230 sys_lgetxattr linux/fs/xattr.c
231 sys_fgetxattr linux/fs/xattr.c
232 sys_listxattr linux/fs/xattr.c
233 sys_llistxattr linux/fs/xattr.c
234 sys_flistxattr linux/fs/xattr.c
235 sys_removexattr linux/fs/xattr.c
236 sys_lremovexattr linux/fs/xattr.c
237 sys_fremovexattr linux/fs/xattr.c
Linux Kernel
51 sys_acct linux/kernel/acct.c
184 sys_capget linux/kernel/capability.c
185 sys_capset linux/kernel/capability.c
136 sys_personality linux/kernel/exec_domain.c
1 sys_exit linux/kernel/exit.c
7 sys_waitpid linux/kernel/exit.c
114 sys_wait4 linux/kernel/exit.c
252 sys_exit_group linux/kernel/exit.c
258 sys_set_tid_address linux/kernel/fork.c
240 sys_futex linux/kernel/futex.c
104 sys_setitimer linux/kernel/itimer.c
105 sys_getitimer linux/kernel/itimer.c
128 sys_init_module linux/kernel/module.c
129 sys_delete_module linux/kernel/module.c
162 sys_nanosleep linux/kernel/posix-timers.c
259 sys_timer_create linux/kernel/posix-timers.c
260 sys_timer_settime linux/kernel/posix-timers.c
261 sys_timer_gettime linux/kernel/posix-timers.c
262 sys_timer_getoverrun linux/kernel/posix-timers.c
263 sys_timer_delete linux/kernel/posix-timers.c
264 sys_clock_settime linux/kernel/posix-timers.c
265 sys_clock_gettime linux/kernel/posix-timers.c
266 sys_clock_getres linux/kernel/posix-timers.c
267 sys_clock_nanosleep linux/kernel/posix-timers.c
103 sys_syslog linux/kernel/printk.c
Scheduling
34 sys_nice linux/kernel/sched.c
154 sys_sched_setparam linux/kernel/sched.c
155 sys_sched_getparam linux/kernel/sched.c
156 sys_sched_setscheduler linux/kernel/sched.c
157 sys_sched_getscheduler linux/kernel/sched.c
158 sys_sched_yield linux/kernel/sched.c
159 sys_sched_get_priority_max linux/kernel/sched.c
160 sys_sched_get_priority_min linux/kernel/sched.c
161 sys_sched_rr_get_interval linux/kernel/sched.c
241 sys_sched_setaffinity linux/kernel/sched.c
242 sys_sched_getaffinity linux/kernel/sched.c
Signals
0 sys_restart_syscall linux/kernel/signal.c
29 sys_pause linux/kernel/signal.c
37 sys_kill linux/kernel/signal.c
48 sys_signal linux/kernel/signal.c
68 sys_sgetmask linux/kernel/signal.c
69 sys_ssetmask linux/kernel/signal.c
73 sys_sigpending linux/kernel/signal.c
126 sys_sigprocmask linux/kernel/signal.c
174 sys_rt_sigaction linux/kernel/signal.c
175 sys_rt_sigprocmask linux/kernel/signal.c
176 sys_rt_sigpending linux/kernel/signal.c
177 sys_rt_sigtimedwait linux/kernel/signal.c
178 sys_rt_sigqueueinfo linux/kernel/signal.c
238 sys_tkill linux/kernel/signal.c
270 sys_tgkill linux/kernel/signal.c
Systems
43 sys_times linux/kernel/sys.c
57 sys_setpgid linux/kernel/sys.c
60 sys_umask linux/kernel/sys.c
65 sys_getpgrp linux/kernel/sys.c
66 sys_setsid linux/kernel/sys.c
74 sys_sethostname linux/kernel/sys.c
75 sys_setrlimit linux/kernel/sys.c
76 sys_old_getrlimit linux/kernel/sys.c
77 sys_getrusage linux/kernel/sys.c
88 sys_reboot linux/kernel/sys.c
96 sys_getpriority linux/kernel/sys.c
97 sys_setpriority linux/kernel/sys.c
121 sys_setdomainname linux/kernel/sys.c
122 sys_newuname linux/kernel/sys.c
132 sys_getpgid linux/kernel/sys.c
147 sys_getsid linux/kernel/sys.c
172 sys_prctl linux/kernel/sys.c
191 sys_getrlimit linux/kernel/sys.c
203 sys_setreuid linux/kernel/sys.c
204 sys_setregid linux/kernel/sys.c
205 sys_getgroups linux/kernel/sys.c
206 sys_setgroups linux/kernel/sys.c
208 sys_setresuid linux/kernel/sys.c
209 sys_getresuid linux/kernel/sys.c
210 sys_setresgid linux/kernel/sys.c
211 sys_getresgid linux/kernel/sys.c
213 sys_setuid linux/kernel/sys.c
214 sys_setgid linux/kernel/sys.c
215 sys_setfsuid linux/kernel/sys.c
216 sys_setfsgid linux/kernel/sys.c
149 sys_sysctl linux/kernel/sysctl.c
Time
13 sys_time linux/kernel/time.c
25 sys_stime linux/kernel/time.c
78 sys_gettimeofday linux/kernel/time.c
79 sys_settimeofday linux/kernel/time.c
124 sys_adjtimex linux/kernel/time.c
Kernel Timer & Process
20 sys_getpid linux/kernel/timer.c
27 sys_alarm linux/kernel/timer.c
64 sys_getppid linux/kernel/timer.c
116 sys_sysinfo linux/kernel/timer.c
199 sys_getuid linux/kernel/timer.c
200 sys_getgid linux/kernel/timer.c
201 sys_geteuid linux/kernel/timer.c
202 sys_getegid linux/kernel/timer.c
224 sys_gettid linux/kernel/timer.c
16-bit uid (wrapper functions)
16 sys_lchown16 linux/kernel/uid16.c
23 sys_setuid16 linux/kernel/uid16.c
24 sys_getuid16 linux/kernel/uid16.c
46 sys_setgid16 linux/kernel/uid16.c
47 sys_getgid16 linux/kernel/uid16.c
49 sys_geteuid16 linux/kernel/uid16.c
50 sys_getegid16 linux/kernel/uid16.c
70 sys_setreuid16 linux/kernel/uid16.c
71 sys_setregid16 linux/kernel/uid16.c
80 sys_getgroups16 linux/kernel/uid16.c
81 sys_setgroups16 linux/kernel/uid16.c
95 sys_fchown16 linux/kernel/uid16.c
138 sys_setfsuid16 linux/kernel/uid16.c
139 sys_setfsgid16 linux/kernel/uid16.c
164 sys_setresuid16 linux/kernel/uid16.c
165 sys_getresuid16 linux/kernel/uid16.c
170 sys_setresgid16 linux/kernel/uid16.c
171 sys_getresgid16 linux/kernel/uid16.c
182 sys_chown16 linux/kernel/uid16.c
Memory Management
250 sys_fadvise64 linux/mm/fadvise.c
225 sys_readahead linux/mm/filemap.c
257 sys_remap_file_pages linux/mm/fremap.c
219 sys_madvise linux/mm/madvise.c
218 sys_mincore linux/mm/mincore.c
150 sys_mlock linux/mm/mlock.c
151 sys_munlock linux/mm/mlock.c
152 sys_mlockall linux/mm/mlock.c
153 sys_munlockall linux/mm/mlock.c
45 sys_brk linux/mm/mmap.c
91 sys_munmap linux/mm/mmap.c
125 sys_mprotect linux/mm/mprotect.c
163 sys_mremap linux/mm/mremap.c
144 sys_msync linux/mm/msync.c
Swapfile
87 sys_swapon linux/mm/swapfile.c
115 sys_swapoff linux/mm/swapfile.c
Socket
102 sys_socketcall linux/net/socket.c

--jollen