Sunday, May 25, 2008

Solaris xVM ---PV Linux domU

The Solaris system has been reinstalled and is vanilla. Xen is working, but a few things still need to be set up from scratch:
  • VNC configuration
  • virt-manager
To configure VNC to support the Xen display, run the following commands:
# svccfg -s xvm/xend setprop config/vncpasswd = astring: \"passwd\"
# svccfg -s xvm/xend setprop config/vnc-listen = astring: \"0.0.0.0\"
# svcadm refresh xvm/xend; svcadm restart xvm/xend

To install virt-manager, download the package and use the following command:
#pkgadd -d. SUNWvirt-manager

The one thing I have left undone is a paravirtual Linux domU. Generally, there are three ways to create a PV Linux domU: virt-install, virt-manager, and xm create.
virt-manager is a GUI installation and supports Fedora well. It does a network install.
virt-install is command-line style but supports several methods: network install, ISO install, CD-ROM, etc.
xm create is also command-line style but mainly works with ISO or CD-ROM installs, and it requires a configuration file beforehand.

First, virt-manager. On our particular box only CentOS works, and DNS in the guest still doesn't work properly, so I use the IP-based path: http://128.153.145.19/pub/centos/5/os/i386/
The screenshot for installation can be seen as below:
I am just waiting for its completion and when it restarts, it looks like this:
While it is installing, it cannot probe the video card. Either the driver is not ported or virt-manager is not complete yet.
Note: so far I have no idea where virt-manager stores the configuration file for the guest. As far as I know, the virt-manager log does record the guest configuration in XML format.

To boot the installed CentOS domU guest, you need the following configuration file
name="centosP"
#kernel="/var/lib/xen/virtinst-vmlinuz.UAktlH"
#ramdisk="/var/lib/xen/virtinst-initrd.img.XxNCk0"
disk=['file:/export/home/newbie/centosP.img,xvda,w']
memory=1024
vif=['']
root="/dev/xvda"

Note: virtinst-vmlinuz.UAktlH and virtinst-initrd.img.XxNCk0 were created by virt-manager; when booting the guest, those two lines should be removed so that GRUB in the image takes over and boots the guest. BTW, xvda is required in the disk option, rather than the 0 used in PV configs (i.e. 'file:...,0,w').
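To make the note concrete, here is a sketch of writing out the cleaned-up boot config: the same fields as above with the two virtinst lines dropped. The /tmp path is just for illustration; put the file wherever you keep your guest configs.

```shell
# Write the post-install boot config for the CentOS PV guest.
# The kernel=/ramdisk= lines are gone, so GRUB inside the image boots it.
cat > /tmp/centosP.cfg <<'EOF'
name="centosP"
disk=['file:/export/home/newbie/centosP.img,xvda,w']
memory=1024
vif=['']
root="/dev/xvda"
EOF

# The guest would then boot with: xm create /tmp/centosP.cfg
grep 'xvda' /tmp/centosP.cfg
```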

Graphics doesn't work for paravirtualization in Solaris yet. Even with virt-manager, CentOS cannot properly detect the video card type, so only text-mode network installation works, using a generic video driver. Paravirtualization is still a black-and-white world: some drivers are missing and need to be ported, but they are in progress and almost there.

Then we tried virt-install and xm create. They cannot install from ISO because the CD-ROM is not ported for paravirtualization; in other words, there are no paravirtual CD-ROM drivers. We can get CentOS to boot, but it keeps asking for the CD-ROM device driver. When booting a Linux-made image, XENBUS reported the block device driver, the NIC device driver, and the console as missing. The details can be seen below:

It is said that one way to work around the CD-ROM issue is to use NFS, but I am not familiar with it.

Solaris DNS

After saving the files from the last broken Solaris, I had to reinstall Solaris again. This time I wasn't lucky enough for everything to go smoothly: the network is broken. I have dual network cards in that box, one plug-in card for the external network and one integrated card for the internal network. It only gets a 10.0.0.x IP and cannot work on the external one, so I bet the plug-in network card is broken. I hooked the external cable to the integrated network card instead. It gets an external IP now, but the DNS server address 2.35 is wrong. It can browse Google via 61.233.169.104, but it cannot go any further with DNS broken.

Without network, Solaris is a lonely isolated island. Even the sky is grey.

By default, Solaris uses the NWAM facility to configure the network automatically. I tried switching to the GNOME network configuration tool, but it doesn't work that well, so I had to dig into NWAM and make it work right.

For more details you can check the pages:
http://opensolaris.org/os/project/nwam/
and
http://docs.sun.com/app/docs/doc/806-1386/6jam5ahnd?a=view

Generally speaking, most websites introduce configuring DNS in Solaris by modifying the related configuration files: /etc/resolv.conf and /etc/nsswitch.conf.
/etc/resolv.conf is to specify the DNS server, search domain, etc.
sample:
search clarkson.edu
nameserver 128.153.128.2
nameserver 128.153.4.2


/etc/nsswitch.conf specifies the resolution order. An address can be resolved in many ways: files, dns, mdns, etc.
The key part of a sample:
hosts: files dns mdns
ipnodes: files dns mdns

Originally, the hosts option only has files; we need to add dns and mdns after it to provide more resolution methods.

How does DNS work in Solaris?
When there is a web address to be resolved, the DNS client in Solaris FIRST consults the /etc/nsswitch.conf file to see where to look first. In this example, the order is local file first, DNS server second.
THEN, if the DNS client consults the local file and does not find an entry, it consults the /etc/resolv.conf file for the name-resolution search list and the addresses of the DNS servers.
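The two-step lookup described above can be sketched as a toy shell function. The hosts file, names, and addresses here are made up for illustration; real resolution is done by the system resolver, not by a script like this.

```shell
# Simulate 'hosts: files dns': consult a local hosts file first,
# and only fall through to DNS when the name is not found there.
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1 localhost
10.0.0.5  printer
EOF

lookup() {
    # step 1, 'files': scan the local hosts file for the name
    addr=$(awk -v h="$1" '$2 == h {print $1}' /tmp/hosts.demo)
    if [ -n "$addr" ]; then
        echo "$addr (files)"
    else
        # step 2, 'dns': here the real client would query the
        # nameservers listed in /etc/resolv.conf
        echo "$1 -> fall through to DNS (resolv.conf nameservers)"
    fi
}

lookup printer
lookup www.clarkson.edu
```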

It seems that as long as we get these two configuration files right, we are done. However, Solaris doesn't work the way Linux does. The Linux style is to modify the configuration, restart the service, and all is good. Solaris works differently: everything lives in a framework. Instead of manually modifying configuration files, there are tools and facilities that provide consistent modification.

Once the NWAM facility can obtain an IP from the DHCP server, DNS should also work properly. But there is another problem on the Clarkson network: the DHCP server gives me 2.35, which is not the DNS server at all (I also tried Ubuntu, with the same result). The key problem is that Clarkson has multiple DNS servers. If the system can broadcast and collect all the DNS server responses, then 2.35 is no longer a problem.

Solaris has the mdnsd facility (multicast DNS daemon), but it doesn't run automatically after booting. Its job is to broadcast to discover DNS servers.

To make it work, first modify /etc/nsswitch.conf to add mdns after the hosts option.
Then start the service with the following command:
# svcadm enable svc:/network/mdns:default
The DNS daemon caches the file, so once the configuration is done, Solaris needs to REBOOT.
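The nsswitch.conf edit can also be done non-interactively with sed. A sketch on a throwaway copy (on the real box you would of course apply this to /etc/nsswitch.conf itself):

```shell
# Append 'dns mdns' to the hosts and ipnodes entries of a copy of nsswitch.conf.
cat > /tmp/nsswitch.demo <<'EOF'
hosts: files
ipnodes: files
EOF

# Rewrite any line ending in 'files' to also list dns and mdns.
sed 's/files$/files dns mdns/' /tmp/nsswitch.demo > /tmp/nsswitch.new
cat /tmp/nsswitch.new
```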
-----------------------------------------ADDON-----------------------------------------------------------------------------------
The network interfaces in Solaris are stored in the file /etc/nwam/llp (LLP):
# cat /etc/nwam/llp
bge0 dhcp
rge0 dhcp

The host name and domain are stored in the file /etc/hosts:
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain solaris

Solaris broken

Solaris is broken again. Maybe it's because I couldn't wait until the system had fully shut down, so some system files were lost and I was asked to run /lib/svc/bin/restore_repository.

But I can get the root console. The FS is read-only; I need to use the commands below to change it to read/write:
/lib/svc/method/fs-root
/lib/svc/method/fs-usr

For more details, you can check the page:
http://www.sun.com/msg/SMF-8000-MY

Now the situation is that I can execute some commands, I have some important files in my home directory, and I have no network at all. How can I get those files out? A day without network is horribly dark~!

The system recommended using fsck. I tried, but it doesn't help. It did find some inconsistencies, but Solaris seems to have lost some system files.

I tried attaching a USB drive to the box, hoping it would be recognized. I used the iostat command below to figure out whether the disk is recognized.
#iostat -En
It shows that the disk is there, but it isn't properly recognized: it appears as sda1 rather than as c1d0...

Fortunately, that box has a CD burner. The next thing worth trying is to burn those files onto a CD. I checked these pages, which look promising:
http://sysunconfig.net/unixtips/cdr.html
and
http://blogs.sun.com/reed/entry/how_to_burn_cd_dvd

First I package the files in the Desktop directory into an ISO file, using the command below:
# mkisofs -l -L -r -o /desk.iso  /export/home/newbie/Desktop

Then I burn it onto the CD with the following configuration and commands:
1) first check the CD-RW device number
#  cdrecord -scanbus
1,0,0 is the device number

2) configure the CD-RW
create a file /etc/default/cdrecord containing:
CDR_FIFOSIZE=8m
CDR_SPEED=8
CDR_DEVICE=USCSI:1,0,0

3) burn the ISO image onto the CD
# cdrecord -v /desk.iso
But it hit a burn error. Failed again.

The last resort is a Live CD. First I tried an Ubuntu Live CD, but it cannot recognize the FS; it requires a UFS package. So I had to try a Solaris Live CD. The options are the 2008.05 Live/installation CD, BeleniX, and SchilliX. I used the 2008.05 release. It doesn't show the disk directly.
Using iostat again, you can see the disk is c5d0. Then mount it:
# mount -F ufs /dev/dsk/c5d0s0 /mnt
Then go to /mnt and you can find three Solaris directories there. Go into each one; one of them is newbie's home directory. So move the ISO file into the home directory, copy the ISO from the home directory onto the Live CD desktop, and burn it to CD. It finally works.

---------------------------------------------------------ADDON---------------------------------------------------------------------------------------
Then I reinstalled Solaris, and it turned out the network was broken: it kept getting a wrong address as the DNS server. The same happened with Ubuntu.
Solaris has two styles of network configuration.
One is the NWAM facility (Network Auto-Magic), the other is the GNOME network configuration. You can use svc to switch between the two services.
For more details, man nwam or http://opensolaris.org/os/project/nwam/.

To see the services that are running, use
#svcs

To enable the default GNOME network configuration
     % svcadm enable svc:/network/physical:default
To disable the Solaris network auto-magic
     % svcadm disable svc:/network/physical:nwam
To start or stop the Solaris DNS server service
     % /lib/svc/method/dns-server start|stop

Friday, May 23, 2008

Solaris Virtualbox

VirtualBox is another full-virtualization product, now owned by Sun, and it works on Solaris. But it cannot work inside Solaris xVM; it will report a "VirtualBox Kernel Driver not Loaded" error.

I am using OpenSolaris Express B87 and VirtualBox 1.6.

Virtualbox can be downloaded from the website http://virtualbox.org/wiki/Downloads

VirtualBox is available for all platforms. For Solaris there are 32-bit and 64-bit AMD packages, corresponding to x86 or AMD64. You can check with the command:
bash-3.2# isainfo
amd64 i386
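A small sketch of turning that isainfo output into a package choice. The amd64 package name is the one used below; the x86 filename is only a guessed example of what the 32-bit package might be called.

```shell
# Pick the VirtualBox package based on the instruction-set list.
arch_list="amd64 i386"   # stand-in for: arch_list=$(isainfo)
case "$arch_list" in
  *amd64*) pkg="VirtualBox-1.6.0-SunOS-amd64-r30421.pkg" ;;
  *)       pkg="VirtualBox-1.6.0-SunOS-x86-r30421.pkg"   ;;  # hypothetical name
esac
echo "$pkg"
```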

Then install the package by the pkgadd command
bash-3.2# pkgadd -d VirtualBox-1.6.0-SunOS-amd64-r30421.pkg
The package filename is VirtualBox-1.6.0-SunOS-amd64-r30421.pkg; pkgadd will install it.

For installation details, refer to ReadMe.txt, which contains all the instructions.

Run the Virtualbox by the command:
bash-3.2# VirtualBox
Note: the V and B in VirtualBox are capitalized.

Below is Virtualbox's application interface.

From now on, we will create a Solaris image in VirtualBox. Its image file format is vmdk or vdi (virtual disk image). The image is created in the default directory /root/.VirtualBox/VDI, so if the image name is solarisV, the file will be solarisV.vdi.

Once the solaris installation is done, the screenshot can be seen as below:
I saw what looks like a configuration file at ~/.VirtualBox/Machines/solarisV/solarisV.xml.

Next, I tried Windows on VirtualBox. It simply follows the normal Windows XP installation steps.
The screenshot can be seen below:
Generally speaking, VirtualBox has no problem installing any guest. It is full virtualization, and it is easy and simple to install and use. No tricky techniques there.



Wednesday, May 21, 2008

Solaris xVM (con't) --HVM guest

Still following the instructions on the page http://opensolaris.org/os/community/xen/docs/virtinstall/

Before going on to create the Solaris HVM guest, we first need to configure VNC. The svc facility will be used.
svccfg is used to configure the service:
# svccfg -s xvm/xend setprop config/vncpasswd = astring: \"passwd\"
-s xvm/xend
-s specifies the service
setprop config/vncpasswd = astring: \"passwd\"
setprop sets the configuration property
Note: you can see that the password is quoted with \"\"

Similarly,
# svccfg -s xvm/xend setprop config/vnc-listen = astring: \"0.0.0.0\"
This command is to specify the vnc-listen


The other is svcadm, which is used to start or stop a service. It can "enable", "disable", "refresh", and "restart" a service; refresh reloads the configuration file.
Here the service is xend, located at xvm/xend.
# svcadm refresh xvm/xend; svcadm restart xvm/xend
It restarts the xen daemon xend.

OK~~~~~! Now, let's create the HVM solaris guest.
machine:root>#virt-install -n solarisH --hvm -r 1024 --vnc -f /solarisH.img -s 8 -c /export/home/newbie/sol-nv-b87.iso
These are almost the same options as for creating the paravirtual Solaris guest: -n solarisH specifies the domain name, -r 1024 the RAM size, -f /solarisH.img the guest image filename, -s 8 a guest image size of 8G, and -c /export/home/newbie/sol-nv-b87.iso installs from the ISO file.

The only differences are --hvm which specifies the guest works in HVM mode and --vnc which specifies the guest will have a VNC connection.

When you run the virt-install command, it first pops up a window asking for the VNC password, which is the passwd we set up before. The rest of the installation looks exactly like installing Solaris on a real machine. The snapshot can be seen below.
Note: The virt-install still has the problem with libvirt as below
libvir: Xen Daemon error : GET operation failed:

WARNING: virt-install doesn't create a configuration file for the guest image. It only creates an image with an OS in it; the user needs to create the configuration file on their own. One simple configuration file is shown below.
A good reference is here: http://opensolaris.org/os/community/xen/docs/HVMdomains.htm
Once the installation is done, you can see the Solaris HVM guest as below.

However, virt-install doesn't provide a configuration file for the created image, so we need to make one on our own. One way to have the system create the configuration for us is to use the virsh command while the guest is running. The instructions can be seen on the page http://docs.sun.com/app/docs/doc/819-2450/6n4o5me9r?a=view
It produces an XML-format configuration file:
#virsh dumpxml solarisH > solarisH.xml
Then we use the virsh facility again to boot the guest from the XML configuration file:
#virsh create solarisH.xml
Next is to connect via VNC. We need the virt-manager tool.
#virt-manager
Then select the guest domain and press the Open button, which will ask for the VNC password. A VNC window pops up and the interface looks like below:
Another way is to manually create a configuration file, as below. Its name is solarisH.py and it is a Python script.
##########################################################
# -*- mode: python; -*-
#============================================================================
# Python configuration setup for 'xm create'.
# This script sets the parameters used when a domain is created using 'xm create'.
# You use a separate script for each domain you want to create, or
# you can set the parameters for the domain on the xm command line.
#============================================================================

import os, re
arch = os.uname()[4]
if re.search('64', arch):
    arch_libdir = 'lib64'
else:
    arch_libdir = 'lib'

#----------------------------------------------------------------------------
# Kernel image file.
kernel = "/usr/lib/xen/boot/hvmloader"

# The domain build function. HVM domain uses 'hvm'.
builder='hvm'

# Initial memory allocation (in megabytes) for the new domain.
#
# WARNING: Creating a domain with insufficient memory may cause out of
# memory errors. The domain needs enough memory to boot kernel
# and modules. Allocating less than 32MBs is not recommended.
memory = 1024

# Shadow pagetable memory for the domain, in MB.
# Should be at least 2KB per MB of domain memory, plus a few MB per vcpu.
shadow_memory = 8

# A name for your domain. All domains must have different names.
name = "solarisH"

# 128-bit UUID for the domain. The default behavior is to generate a new UUID
# on each call to 'xm create'.
#uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9"

#-----------------------------------------------------------------------------
# the number of cpus guest platform has, default=1
vcpus=1

# enable/disable HVM guest PAE, default=0 (disabled)
#pae=0

# enable/disable HVM guest ACPI, default=0 (disabled)
#acpi=1

# enable/disable HVM guest APIC, default=0 (disabled)
#apic=1

# List of which CPUS this domain is allowed to use, default Xen picks
#cpus = "" # leave to Xen to pick
#cpus = "0" # all vcpus run on CPU0
#cpus = "0-3,5,^1" # run on cpus 0,2,3,5

# Optionally define mac and/or bridge for the network interfaces.
# Random MACs are assigned if not given.

vif = [ 'type=ioemu' ]

#----------------------------------------------------------------------------
# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
# Each disk entry is of the form phy:UNAME,DEV,MODE
# where UNAME is the device, DEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write.

#disk = [ 'file:/export/home/mydisk.raw,hdc,w', 'file:/export/home/install.iso,hda:cdrom,r' ]
disk = [ 'file:/solarisH.img,hda,w']
#disk = [ 'phy:/dev/dsk/c1d0p0,hdc,w', 'file:/export/home/install.iso,hda:cdrom,r' ]

#disk = [ 'phy:/dev/zvol/dsk/mypool/mydisk,hdc,w', 'file:/export/home/install.iso,hda:cdrom,r' ]

#----------------------------------------------------------------------------
# Configure the behaviour when a domain exits. There are three 'reasons'
# for a domain to stop: poweroff, reboot, and crash. For each of these you
# may specify:
#
# "destroy", meaning that the domain is cleaned up as normal;
# "restart", meaning that a new domain is started in place of the old
# one;
# "preserve", meaning that no clean-up is done until the domain is
# manually destroyed (using xm destroy, for example); or
# "rename-restart", meaning that the old domain is not cleaned up, but is
# renamed and a new domain started in its place.
#
# The default is
#
# on_poweroff = 'destroy'
# on_reboot = 'restart'
# on_crash = 'restart'
#
# For backwards compatibility we also support the deprecated option restart
#
# restart = 'onreboot' means on_poweroff = 'destroy'
# on_reboot = 'restart'
# on_crash = 'destroy'
#
# restart = 'always' means on_poweroff = 'restart'
# on_reboot = 'restart'
# on_crash = 'restart'
#
# restart = 'never' means on_poweroff = 'destroy'
# on_reboot = 'destroy'
# on_crash = 'destroy'

on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'preserve'

#============================================================================

# New stuff
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'

#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
boot="cda"
#boot='d'

#-----------------------------------------------------------------------------
# write to temporary files instead of disk image files
#snapshot=1

#----------------------------------------------------------------------------
# enable SDL library for graphics, default = 0
sdl=0

#----------------------------------------------------------------------------
# enable VNC library for graphics, default = 1
vnc=1

#----------------------------------------------------------------------------
# address that should be listened on for the VNC server if vnc is set.
# default is to use 'vnc-listen' setting from /etc/xen/xend-config.sxp
vnclisten="0.0.0.0"

#----------------------------------------------------------------------------
# set VNC display number, default = domid
#vncdisplay=1

#----------------------------------------------------------------------------
# try to find an unused port for the VNC server, default = 1
#vncunused=1

#----------------------------------------------------------------------------
# enable spawning vncviewer for domain's console
# (only valid when vnc=1), default = 0
vncconsole=1

vncpasswd=''

#----------------------------------------------------------------------------
# no graphics, use serial port
#nographic=0

#----------------------------------------------------------------------------
# enable stdvga, default = 0 (use cirrus logic device model)
stdvga=0

#-----------------------------------------------------------------------------
# serial port re-direct to pty deivce, /dev/pts/n
# then xm console or minicom can connect
#serial='pty'
#serial='stdio'
#serial='file:/tmp/blah'
#serial='/dev/pts/0'
serial='null'


#-----------------------------------------------------------------------------
# enable sound card support, [sb16|es1370|all|..,..], default none
#soundhw='sb16'


#-----------------------------------------------------------------------------
# set the real time clock to local time [default=0 i.e. set to utc]
#localtime=1


#-----------------------------------------------------------------------------
# start in full screen
#full-screen=1


#-----------------------------------------------------------------------------
# Enable USB support (specific devices specified at runtime through the
# monitor window)
usb=1

# Enable USB mouse support (only enable one of the following, `mouse' for
# PS/2 protocol relative mouse, `tablet' for
# absolute mouse)
#usbdevice='mouse'
usbdevice='tablet'
##########################################################
Note: two things in the script need to be taken care of.
1) disk = [ 'file:/solarisH.img,hda,w']
Here the disk is defined by file location, device name, and access rights,
rather than the PV style, where you may see disk = [ 'file:/solarisH.img,0,w' ].
2) serial='null'
There is no console, which means that when booting the guest we cannot access it via the console, but we can use VNC.
Then we can boot the guest by the command
#xm create solarisH.py
Note: here we don't use the -c option for xm create, and we cannot use xm console to connect to the guest. VNC is automatically invoked to display the guest's graphics.

References:
Here Sun provides a diversity of domUs ready for use: http://www.opensolaris.org/os/community/xen/How-To-8-15-06/install/AugDomUs/

Here Sun provides the sample of HOWTO make a HVM domU configuration file and HOWTO make a Solaris domU:
http://opensolaris.org/os/community/xen/docs/HVMdomains.htm
http://opensolaris.org/os/community/xen/docs/install-solaris-domu-iso.htm
http://blogs.sun.com/shalon/entry/a_summary_of_creation_of
--------------------------------------------------------ADDON-----------------------------------------------------------------------
How to choose VM RAM and storage?
The machine I am using is
CPU
memory 4G
.....
When creating a Solaris HVM guest, how much memory should it have and how much storage should be allocated for it?
Try 2G RAM and an 8G image.
Problem 1: 2G of memory is too much for a guest;
it makes dom0 run slow.
Problem 2: an 8G image is not enough for Solaris.
When you install Solaris Express Developer Edition, it automatically installs all the required packages and needs more than 8G, so the installation failed. I guess it ran out of space.

Solution:
I chose 1G RAM and a 12G image for the guest, and it works. You can see the working Solaris guest below:

Tuesday, May 20, 2008

Solaris xVM (xen on Solaris)

Xen comes along with Solaris Community Express Edition(SCXE).
I am using Nevada version 87 (SNV b87).

In GRUB there are entries for Solaris, Solaris xVM, and Solaris Failsafe. If not, you can follow the instructions on the page: http://opensolaris.org/os/community/xen/docs/setupsoldom0/
Facility bootadm will help you to operate on menu.lst
# bootadm list-menu (equivalent to cat /boot/grub/menu.lst)
# bootadm -m upgrade (automatically searches the boot directory and updates menu.lst)

The box is using the entry below, so it is 64-bit Xen compatible:
#---------- ADDED BY BOOTADM - DO NOT EDIT ----------
title Solaris xVM
kernel$ /boot/$ISADIR/xen.gz
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix
module$ /platform/i86pc/$ISADIR/boot_archive
#---------------------END BOOTADM--------------------


Note: in the Solaris 2008.05 release, Xen doesn't come bundled, but you can install the package using the package management application. However, bootadm doesn't work.

Once it is booted into Solaris xVM, you can use the command xm list to see that dom0 is working, as below:

Next is to create a Solaris domU. The instructions are on the page: http://opensolaris.org/os/community/xen/docs/virtinstall/

First, I tried to create a Solaris guest in paravirtual mode. I found that only the command below works:


machine:root> virt-install --nographics -n SolarisUU --paravirt -f /xvm/SolarisU.img -r 1024 --mac=aa:ff:bb:aa:28:16 -s 8 -l /export/home/newbie/sol-nv-b87.iso



  • --nographics

the installation has no graphic mode; it runs purely in the console

  • -n SolarisUU

the guest domain name will be SolarisUU

  • --paravirt

the guest will be created in paravirt mode
Note: on the webpage, the option paravirt is missing 'a'

  • -f /xvm/SolarisU.img

the guest will live in a file named SolarisU.img, created in the /xvm directory

  • -r 1024

the guest domain will use 1024M memory

  • --mac=aa:ff:bb:aa:28:16

the guest domain's network mac address is aa:ff:bb:aa:28:16

  • -s 8

the file size is 8G

  • -l /export/home/newbie/sol-nv-b87.iso

the guest domain will install from iso image

The rest is to follow the Solaris installation process; it will automatically reboot and run the guest domain. Finally, you will see the guest domain running as below. The guest IP is 128.153.144.100. So far I haven't found anywhere to choose the guest domain's network mode; the default is bridge mode. I also haven't found where virt-install puts the guest configuration file.


Comments:

1) When creating a paravirt guest, it complains: libvir: Xen Daemon error : GET operation failed:

2) The simple command # virt-install by itself doesn't work.

3) The VNC command doesn't work; it claims that authentication failed.

# svccfg -s xvm/xend setprop \
config/vncpasswd = astring: \"somepwd\"

# svcadm refresh xvm/xend; svcadm restart xvm/xend

4) virt-manager can be installed under xVM, but I cannot get it to display or create guests, so I have my doubts about it. Here are the instructions to install virt-manager in xVM.

The page for virt-manager project is at http://opensolaris.org/os/project/jds/tasks/virt-manager/.
1) Download the binary package for SNV 83 or later from http://opensolaris.org/os/project/jds/tasks/virt-manager/virt-manager-0.4.0-04-09.tar.gz.
2) Untar it, go into the directory pkgs-04-09, and install with bash-3.2# pkgadd -d. SUNWvirt-manager.
Note: it has a dot right after -d and no space between them.
3) On the command line, type virt-manager; it will show up as below.

TODO list:

1. Figure out where the configuration files are stored by virt-install

2. Figure out how to run an existing guest

3. Make VNC work

4. Create HVM guest: solaris and windows

5. Create paravirt Fedora guest

----------------------------------------------------ADDON-----------------------------------------------------------------------------------
virt-install is a tool that works and can create both PV and HVM domU images, but after OS installation and image creation it doesn't provide a configuration file to run the guest, so users need to write one on their own. Here is the one I am using; you can also refer to http://opensolaris.org/os/community/xen/docs/install-solaris-domu-iso.htm
and
http://blogs.sun.com/mrj/entry/installing_opensolaris_on
http://bderzhavets.blogspot.com/2008/03/install-solaris-0108-hvm-domu-64-bit-at.html

a python configuration script pv.py

name = "solarisU"
vcpus = 1
memory = "1024"
disk = ['file:/solarisU.img,0,w']
vif = ['']
on_shutdown = "destroy"
on_reboot = "restart"
on_crash = "destroy"


Note: in the disk option, you need to specify where your guest image is.

To run the guest once you have the image and the configuration file, you can use the command below to boot and connect to it.
#xm create -c pv.py
If you want to get out of the domU, you can use the key combination ctrl+] (the same as Linux Xen) to exit the guest. If the guest is already running and you are outside of it, you can use the command below to get into it by specifying its domain name or domain ID.
#xm console solarisU