boundaries. For good measure, align all other objects to cache-line
boundaries.
Use the new arch_loadseg I/F to keep track of kernel text and
data so that we can wire as much of it as possible. It is the
responsibility of the kernel to link critical (read: IVT-related)
code and data at the front of the respective segment so that it's
covered by TRs before the kernel has a chance to add more
translations.
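For illustration, a minimal sketch of what such an arch_loadseg hook
could look like, assuming a signature along the lines of the loader's
other arch_switch hooks (the variable names are illustrative, not the
exact FreeBSD code):

    #include <sys/types.h>
    #include <machine/elf.h>

    /* Record where kernel text and data land so they can be wired
       with translation registers (TRs) later. */
    static vm_offset_t text_start, data_start;
    static size_t text_size, data_size;

    static void
    ia64_loadseg(Elf_Ehdr *eh, Elf_Phdr *ph, uint64_t delta)
    {
            if (ph->p_flags & PF_X) {       /* executable: text segment */
                    text_start = ph->p_vaddr + delta;
                    text_size = ph->p_memsz;
            } else {                        /* otherwise: data segment */
                    data_start = ph->p_vaddr + delta;
                    data_size = ph->p_memsz;
            }
    }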
Use a better way of determining whether we're loading a legacy
kernel or not. We can't check for the presence of the PBVM page
table, because we may have unloaded that kernel and loaded an
older (legacy) kernel after that. Simply base the decision on the
load address of the kernel most recently loaded.
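A hedged sketch of such a check, assuming legacy kernels are linked
outside the PBVM region (IA64_PBVM_BASE as in vmparam.h; the actual
test may differ):

    #include <machine/vmparam.h>    /* IA64_PBVM_BASE (assumed) */

    static int
    is_legacy_kernel(uint64_t loadaddr)
    {
            /* PBVM-aware kernels load inside the PBVM region;
               legacy kernels are linked elsewhere. */
            return ((loadaddr >> 61) != (IA64_PBVM_BASE >> 61));
    }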
sizes are not supported.
o Map the PBVM page table.
o Map the PBVM using the largest possible power of 2 that is less than
the amount of PBVM used and round down to a valid page size. Note
that the current kernel is between 8MB and 16MB in size, which would
mean that 8MB would be the typical size of the mapping, if only 8MB
weren't an invalid page size. In practice, we end up mapping the first
4MB of PBVM in most cases.
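A sketch of that size computation (valid ia64 page sizes are 4K, 8K,
and then even powers of 2 from 16K up: 16K, 64K, 256K, 1M, 4M, 16M,
and so on; the function name is illustrative):

    #include <stdint.h>

    static uint64_t
    pbvm_map_size(uint64_t pbvm_used)
    {
            int lg;

            /* Largest power of 2 not larger than the PBVM in use. */
            for (lg = 63; lg > 0; lg--)
                    if (pbvm_used & (1ULL << lg))
                            break;
            /* Odd exponents above 8K (e.g. 8MB == 2^23) are invalid
               page sizes; round down to the next even exponent. */
            if (lg > 13 && (lg & 1) != 0)
                    lg--;
            return (1ULL << lg);
    }

For a 12MB kernel this yields 8MB first, then rounds down to 4MB,
matching the behaviour described above.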
between kernel virtual address and physical address anymore. This is
so that we can link the kernel at some virtual address without having
to worry about whether the corresponding physical memory exists and is
available. The PBVM uses 64KB pages that are mapped to physical
addresses using a page table. The page table is at least 1 EFI page
in size, but can grow up to 1MB. This effectively gives us a memory
size between 32MB and 8GB -- i.e. enough to load a DVD image if one
wants to.
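The 32MB and 8GB figures follow directly from the page-table sizing,
assuming 8-byte PTEs and 4KB EFI pages:

    #include <stdint.h>

    #define PBVM_PAGE_SIZE  (64 * 1024)     /* 64KB PBVM pages */
    #define PBVM_PTE_SIZE   8               /* assumed: 8-byte entries */

    /* 4KB of PTEs: (4096 / 8) * 64KB = 32MB
       1MB of PTEs: (1048576 / 8) * 64KB = 8GB */
    static uint64_t
    pbvm_span(uint64_t pgtbl_bytes)
    {
            return (pgtbl_bytes / PBVM_PTE_SIZE * PBVM_PAGE_SIZE);
    }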
The loader assigns physical memory based on the EFI memory map and
makes sure that each chunk of physical memory is a power of 2 in
size and naturally aligned. At this time there's no consideration
for allocating physical memory that is close to the BSP.
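A hedged sketch of such an allocation from a single EFI memory
descriptor (illustrative only; the real allocator also walks the
whole memory map):

    #include <stdint.h>

    /* Return a base for 'want' bytes inside [base, base+avail), sized
       up to a power of 2 and naturally aligned, or 0 if it won't fit. */
    static uint64_t
    alloc_pow2_aligned(uint64_t base, uint64_t avail, uint64_t want)
    {
            uint64_t sz, aligned;

            for (sz = 4096; sz < want; sz <<= 1)
                    ;                       /* round up to a power of 2 */
            aligned = (base + sz - 1) & ~(sz - 1);  /* natural alignment */
            if (avail < sz || aligned - base > avail - sz)
                    return (0);
            return (aligned);
    }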
The kernel is informed about the physical address of the page table
and its size and can locate all PBVM pages through it.
The loader does not wire the PBVM page table yet. Instead it wires
all of the PBVM with a single translation. This is fine for now,
but a follow-up commit will fix it. We cannot handle more than 32MB
right now.
Note that the loader will map as much of the loaded kernel and
modules as possible, but it's up to the kernel to handle page faults
for references that aren't mapped. To make that easier, the page
table is mapped at a fixed virtual address.
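A hedged sketch of the lookup this enables on the kernel side (the
fixed page-table address is left as a symbolic PBVM_PGTBL_VA, and the
PTE is assumed to hold the 64KB-aligned physical address of the page):

    #include <stdint.h>

    #define PBVM_PAGE_SHIFT 16              /* 64KB pages */
    #define PBVM_PAGE_MASK  ((1ULL << PBVM_PAGE_SHIFT) - 1)

    static uint64_t *pbvm_pgtbl = (uint64_t *)PBVM_PGTBL_VA; /* fixed VA */

    static uint64_t
    pbvm_va_to_pa(uint64_t va)
    {
            uint64_t pte;

            pte = pbvm_pgtbl[(va - IA64_PBVM_BASE) >> PBVM_PAGE_SHIFT];
            return ((pte & ~PBVM_PAGE_MASK) | (va & PBVM_PAGE_MASK));
    }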
speculative loads. This at least makes control speculative loads
work. In the future we should analyze which faults/exceptions we
want to handle rather than defer, so as to avoid calling the
recovery code when it's not strictly necessary.
1. Make libefi portable by removing ia64 specific code and build
it on i386 and amd64 by default to prevent regressions. These
changes include fixes and improvements over previous code to
establish or improve APIs where none existed or when the amount
of kludging was unacceptably high.
2. Increase the amount of sharing between the efi and ski loaders
to improve maintainability of the loaders and simplify making
changes to the loader-kernel handshaking in the future.
The versions of the efi and ski loaders have both been bumped to 1.2,
as user-visible improvements and changes have been made.
things over floppy size limits, I can exclude it for release builds or
something like that. Most of the changes are to get the load_elf.c file
into a separate elf32_ or elf64_ namespace so that you can have two
ELF loaders present at once. Note that for 64-bit kernels, it actually
starts up the kernel already in 64-bit mode with paging enabled. This
is really easy because we have a known minimum feature set.
Of note is that for amd64, we have to pass in the BIOS INT 15h 0xe820
memory map because once in long mode, you absolutely cannot make VM86
calls. amd64 does not use 'struct bootinfo' at all. It is a pure loader
metadata startup, just like sparc64 and powerpc. Much of the
infrastructure to support this was adapted from sparc64.
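For context, this is roughly how the i386 loader walks the E820 map
through its v86 interface (field and macro names follow the libi386
and btx conventions; treat the details as a sketch):

    #include <btxv86.h>             /* v86, v86int(), V86_CY() */
    #include "libi386.h"            /* VTOPSEG(), VTOPOFF() */
    #include <machine/pc/bios.h>    /* struct bios_smap, SMAP_SIG */

    static struct bios_smap smap;

    static void
    fetch_e820(void)
    {
            v86.ebx = 0;                    /* continuation: 0 starts */
            do {
                    v86.ctl = V86_FLAGS;
                    v86.addr = 0x15;        /* BIOS memory services */
                    v86.eax = 0xe820;       /* get system memory map */
                    v86.ecx = sizeof(smap);
                    v86.edx = SMAP_SIG;     /* 'SMAP' */
                    v86.es = VTOPSEG(&smap);
                    v86.edi = VTOPOFF(&smap);
                    v86int();
                    if (V86_CY(v86.efl) || v86.eax != SMAP_SIG)
                            break;
                    /* ...append smap to the kernel metadata... */
            } while (v86.ebx != 0);
    }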
Previous kernels unintentionally depended on this mapping, but as
of version 1.123 of src/sys/ia64/ia64/machdep.c this dependency
has been removed. Consequently, one has to update the kernel
before updating the loader. The documented/recommended upgrade
will suffice in this case.
Due to a visible (from the kernel's point of view) change in
behaviour, bump the loader version number from 0.3 to 1.0.
Approved by: re (carte blanche)
pages are 4KB.
o As a second-order fix, don't assume that there's enough space
  left in the page after the bootinfo block to hold the memory
  map.
o A third-order fix is that we removed the assumption that a
  bootinfo block fits in a single 8KB page.
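A sketch of the second-order fix, i.e. checking rather than assuming
that the memory map fits behind the bootinfo block (names are
illustrative):

    #include <stddef.h>

    static void *
    place_memory_map(void *bi, size_t bi_size, size_t page_size,
        size_t map_size)
    {
            size_t used = (bi_size + 15) & ~(size_t)15; /* 16-byte align */

            if (used + map_size > page_size)
                    return (NULL);  /* doesn't fit: allocate separately */
            return ((char *)bi + used);
    }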
PR: ia64/39415
Submitted by: Espen Skoglund <esk@ira.uka.de>
- Don't include ia64_cpu.h and cpu.h
- Guard definitions by _NO_NAMESPACE_POLLUTION
- Move definition of KERNBASE to vmparam.h
o Move definitions of IA64_RR_{BASE|MASK} to vmparam.h
o Move definitions of IA64_PHYS_TO_RR{6|7} to vmparam.h
o While here, remove some left-over Alpha references.
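For reference, the definitions being moved look roughly like this in
vmparam.h (the top 3 bits of an ia64 virtual address select the
region; spellings from memory, so verify against the tree):

    /* Region-register encoding: VA bits 63..61 select a region. */
    #define IA64_RR_BASE(n)         (((uint64_t)(n)) << 61)
    #define IA64_RR_MASK(x)         ((x) & ((1UL << 61) - 1))

    /* Direct-mapped physical addresses in regions 6 (uncacheable)
       and 7 (cacheable). */
    #define IA64_PHYS_TO_RR6(x)     ((x) | IA64_RR_BASE(6))
    #define IA64_PHYS_TO_RR7(x)     ((x) | IA64_RR_BASE(7))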
register r8. We continue to write the bootinfo block at the same
hardwired address, because the kernel still expects it there.
It is expected that future kernels use register r8 to get to the
bootinfo block and don't depend on the hardwired address anymore.
Bump the loader version once again due to the interface change.
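A hedged sketch of the kernel-side handshake this enables (bootinfo_r8
stands for a value saved from r8 by the entry stub, LEGACY_BOOTINFO_PA
for the old hardwired address; both names are hypothetical):

    extern uint64_t bootinfo_r8;    /* r8 as saved by locore (assumed) */

    static struct bootinfo *
    ia64_get_bootinfo(void)
    {
            if (bootinfo_r8 != 0)   /* new loaders pass the address in r8 */
                    return ((struct bootinfo *)
                        IA64_PHYS_TO_RR7(bootinfo_r8));
            /* Fall back to the hardwired address for old loaders. */
            return ((struct bootinfo *)LEGACY_BOOTINFO_PA);
    }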