(Perhaps this section should be placed somewhere else in the hierarchy ...)
Have a look here: San Antonio Electronic Commerce Resource Center
European banks use it for collateral management, US banks use their competitor by the name of “...”. There is no other system that you can purchase. In such a situation you are obviously always their guinea pig.
“OTC Panorama” may look like a C/S (client / server) application. The two large and sole building blocks seem to be:
an Oracle database with a legion of tables and stored procedures
a WinNT user interface
But the WinNT user interface is far, far more than that. Far more data than you would think gets shifted between the database and “the GUI”; we have to assume that the actual computational processes happen within “the GUI”.
Have a good laugh: the scripting language they make you use with “RiskWatch” is Visual Basic. Being an 85% UNIX person, I still found that a good reason to invest a fair amount of money in acquiring a batch of books on Visual Basic.
One way of feeding corporate data into RiskWatch is CSV files. But they were creative enough to invent an extra CSV file feature: the first field of a row contains either an empty string or an ID; the ID then declares the entire region of rows from the current row up to the next row with such an ID (actually excluding that row) to be of “type” (???) “ID”. Rows with such an ID in the first field contain the column names valid “for the time being”, the other rows the values themselves. This way we end up with inhomogeneous CSV files. I have not seen such a beast anywhere else. BTW: that's why we can't use perl's DBD::CSV in this context.
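For illustration, here is how such a file could be split into homogeneous per-ID sections with awk; the field separator, the IDs and the sample values are all made up, since I haven't shown real data:

```shell
# a made-up sample of such an inhomogeneous CSV file: rows with an ID in
# the first field carry column names, the following rows carry values
cat > inhomogeneous.csv <<'EOF'
HDR1|date|price
|2003-03-27|101.5
|2003-03-28|102.0
HDR2|curve|tenor|rate
|EUR|1Y|0.042
EOF

# split it into one homogeneous CSV file per ID section
awk -F'|' '
  $1 != "" { if (out != "") close(out); out = $1 ".csv" }  # ID row: new section
  { print > out }   # both the header row and its value rows go to that file
' inhomogeneous.csv
```

Each resulting file (HDR1.csv, HDR2.csv) is then homogeneous enough for something like DBD::CSV to digest.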
...
The Open Source community's system of choice.
Version Control with Subversion, the free online version in various formats -- my favourite book, although my first one was the “pragmatic ...” one
Version Control with Subversion, ISBN: 0596004486
buy this book on amazon.de
Pragmatic Version Control Using Subversion (V2), ISBN: 0977616657
buy this book on amazon.de
A few interesting topics (dealt with at least at svnbook.red-bean.com):
“resurrecting deleted items” |
“versioning symbolic links” (aka symlinks) |
expect the change indicators to get explained in the context of "svn status" [21] |
Visual SourceSafe (VSS) is a source code control system without outrageous new features, but it is a quite mature and well-integrated[22] system. The one feature I would like to point out especially is its pinning. The scenario: you pin a version of a file before you start some major modifications that you will check in one by one. Everybody who gets a read-only copy then does not get the latest version checked in, but the pinned version. When you finally complete your modifications, you pull off the pin. Some URL-s:
the home page |
... |
ClearCase is the multi-platform successor of Apollo Domain's DSEE. Although I never used ClearCase myself, I was involved in the introduction of ClearCase into the development teams of a major German aerospace company. ClearCase provides you with a real, per-user file system view of the versioned software product components for some UNIX platforms and Win NT(?).
I was a user of it. Sun's NSE is based on SCCS (at the low level), and it provided you with capabilities similar to what ClearCase offers nowadays.
VC is a mode for the GNU Emacs editor that provides easy control of SCCS, RCS, or CVS from within Emacs. I love VC, I need VC, I must use VC -- I wouldn't want to work without it any more.
The Open Source community's system of choice.
CVS Pocket Reference, ISBN: 0-596-00567-9
buy this book on amazon.de
Essential CVS, ISBN: 0-596-00459-1
Pragmatic Version Control Using CVS, ISBN: 0-9745140-0-4
Open Source Development with CVS, 3rd Edition, by Karl Fogel and Moshe Bar (also available through O'Reilly)
The Revision Control System is my choice for version control; I especially like the Emacs support for RCS. BTW, everyone uses it: even IBM for the maintenance of AIX (have a look at their include files!), ...; and IBM also sells you 24 h support for CVS+RCS.
Database languages comprise:
a Data Definition Language = DDL
a Data Manipulation Language = DML
a Storage Structure Language = SSL
...
I found quite a few links pointing in very different directions; obviously I didn't find the most interesting one first, so I will document to some extent everything I found. I assume 11g will be the right path to follow, but I am not 100% sure at the moment.
...
Here you find the article: http://en.opensuse.org/SDB:Oracle_installation
...
openSUSE 11.1: Ignore "libxcb: WARNING! Program tries to unlock a connection without having acquired a lock first..." from Oracle Universal Installer (OUI). Install the libstdc++33 package, i.e. zypper install libstdc++33; this package is used by some of the makefiles when creating database instances.
Oracle 11gR1: same as openSUSE 11.0 -> ignore the libxcb warning above. You may need the previous workaround: "export LIBXCB_ALLOW_SLOPPY_LOCK=1". You can learn more about this Java issue at Sun bug 6532373 -> http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6532373
Oracle 11gR1 installation steps:
1. Install openSUSE 11.0 with the "C/C++ Development" selection.
2. Download and install the orarun package (http://ftp.novell.com/partners/oracle/sles-10/orarun.rpm). Enable and set a password for the user oracle newly created by orarun.
3. Change some environment variables -- ORACLE_HOME, ORACLE_SID, TNS_ADMIN -- in /etc/profile.d/oracle.sh.
4. Set the updated kernel parameters by executing /etc/init.d/oracle start or rcoracle start.
5. Download and extract the Oracle 11gR1 SW (-> ...). http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html -> Oracle Database 11g Release 2 ...
6. Log in as user oracle and run the Oracle Universal Installer "database/runInstaller". Just follow the step by step questions of the Oracle installer.
Note: if you are on x86_64, please make sure a 32 bit runtime environment is installed to avoid Oracle linking errors (this seems to mean that the 32 bit gcc package must be present).
Alternatively, if you don't want problems in the installation of Oracle 10g or 11g on openSUSE 11.0 (64 bit), you can use this script doris1.1d.sh (-> ...). This script will automate the setup by downloading dependencies from Yast, sorting out all the 32 bit and 64 bit libraries and linking where required.
The purpose of this script is not to install Oracle but just to get the system ready for installation. Own risks policy applies. (root@localhost# sh doris1.1d.sh suse11 10g) -- Ade90036 12:59, 7 August 2008 (UTC)
...
Oracle Database 10g (Express Edition) …
I went for Oracle Database 10g Express Edition (Universal) on the server, and for Oracle Database 10g Express Client on the client(s).
root@HayekX # rpm -i /usr/local/oracle-xe-univ-10.2.0.1-1.0.i386.rpm
Executing Post-install steps...
insserv: warning: script 'oracle-xe' missing LSB tags and overrides
insserv: Default-Start undefined, assuming default start runlevel(s) for script `oracle-xe'
oracle-xe  0:off  1:off  2:off  3:on  4:off  5:on  6:off
You must run '/etc/init.d/oracle-xe configure' as the root user to configure the database.
root@HayekX # /etc/init.d/oracle-xe configure
I was not successful downloading the RPM using firefox on Linux ...; maybe there is a hidden association of .rpm to the RealPlayer, but finally I succeeded with the download using firefox on Windows.
This is one of the URL-s, that you will come across: Oracle Database 10g Release 2 (10.2.0.1) -- Express Edition for Linux x86
Another one: Documentation Library -- Oracle Database Express Edition 10g Release 2 (10.2)
If you need to change the configuration settings, then you can do so by running the following command:
$ /etc/init.d/oracle-xe configure
To start the database manually, run this command:
$ /etc/init.d/oracle-xe start
To stop the database manually, use the following command:
$ /etc/init.d/oracle-xe stop
Slightly changed by me for better readability:
source /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin/oracle_env.sh
If the directory /usr/lib/oracle/ does not exist beforehand, this will lead to a nasty looking shell syntax problem:
$ rpm -ivh oracle-xe-client-10.2.0.1-1.0.i386.rpm
so create that directory before executing the rpm command.
Slightly changed by me for better readability:
source /usr/lib/oracle/xe/app/oracle/product/10.2.0/client/bin/oracle_env.sh
To configure the connection to Oracle Database XE Server, refer to “Oracle Database Express Edition 2 Day DBA”.
...
“To access this manual and the rest of the documentation set, click Documentation under External Links on the Database Home Page:”
Installing the Database and Getting Started -- Installation Guide for ...
Installing the Database and Getting Started -- Getting Started Guide
Administering the Database -- 2 Day DBA
Administering the Database -- 2 Day Developer Guide
... and my category of experience with it: network: CODASYL (SNI's BS2000 UDMS)
DDL, DML
embedded (COBOL)
interactive
Of course, perl-DBI is not a database itself, but it's the preferred way to access any kind of relational database in perl.
Using the DBI and DBD modules (the `I' in DBI relates to the database-independent part, the `D' in DBD to the database-dependent part), I implemented within about an hour a complete bulk-copy-like utility that can copy one entire table from any (supported) database to another table (with the same columns) of any other (supported) database; CSV files are regarded as just one other (though very simple) database -- using the DBD::CSV module. Of course I keep polishing this utility once in a while.
DBI comes with dbish, a vendor independent SQL shell, using e.g. GNU readline.
I like DBD::CSV for working on CSV files. I do find it a little annoying that I can't give a CSV file name like table.csv as a table name, but then, that's the nature of the thing: table.csv is not an SQL-ish table name. So I either have to symlink table.csv to something like table, or I have to do something like this:
$dbh->{'csv_tables'}->{table} = { 'file' => 'table.csv'};
In the context of my dbi_utils.pl (see below!) I usually resort to renaming the file, so that it looks like a proper SQL-ish table name.
The default EOL convention is CR+LF, but that may certainly be adapted; it's just a common pitfall for me not to look at the line endings before loading a file.
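The line-ending pitfall can at least be checked and fixed mechanically before loading; a minimal sketch with made-up file names:

```shell
# a sample file with CR+LF line endings, as DBD::CSV expects by default
printf 'a,b\r\n1,2\r\n' > table_crlf.csv

# strip the CRs if the consumer wants plain LF line endings instead
tr -d '\r' < table_crlf.csv > table_lf.csv
```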
In general you would think that if you properly separate columns, then it shouldn't matter whether you have blank characters embedded in the value of a column. Column values with embedded single quotes apparently give the Oracle interface headaches, so column values best get enclosed in double quotes, if they aren't already. The easiest way to achieve this sufficiently seems to be to filter the CSV file through something like this (see below for my dbi_utils.pl!):
$ .../dbi_utils.pl \
    --job_copy \
    --source_dsn "dbi:CSV:f_dir=$PWD;csv_sep_char=|" --source_table gat_200303271310_unquoted \
    --dest_dsn "dbi:CSV:f_dir=$PWD;csv_sep_char=|" --dest_table gat_200303271310_quoted_properly
My script dbi_utils.pl can do a copy from whatever DBD-driver-supported interface to whatever DBD-driver-supported interface. E.g. you can copy from an Oracle table to an Oracle table, from an Oracle table to a Sybase table, from a CSV file to an Oracle table, etc.
Microsoft Office's Document Imaging can convert faxes into text
ImageMagick -- a bundle of utilities
ImageMagick provides a suite of commandline utilities for creating, converting, editing, and displaying images:
display is a machine architecture independent image processing and display program. It can display an image on any workstation display running an X server.
import reads an image from any visible window on an X server and outputs it as an image file. You can capture a single window, the entire screen, or any rectangular portion of the screen.
montage creates a composite by combining several separate images. The images are tiled on the composite image with the name of the image optionally appearing just below the individual tile.
convert converts an input file using one image format to an output file with a differing image format.
mogrify transforms an image or a sequence of images. These transforms include image scaling, image rotation, color reduction, and others. The transmogrified image overwrites the original image.
identify describes the format and characteristics of one or more image files. It will also report if an image is incomplete or corrupt.
composite composites images to create new images.
conjure interprets and executes scripts in the Magick Scripting Language (MSL).
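A tiny taste of convert and identify working together; the file name is made up:

```shell
# let convert create a 2x2 red PNG from scratch,
# then let identify report its geometry and format
convert -size 2x2 xc:red red.png
identify -format '%wx%h %m' red.png    # prints: 2x2 PNG
```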
why not let Microsoft Outlook be the frontend instead of a Lotus Notes client?
my separate article on using GPRS under Linux
Disrespectfully ignoring the (existing!) world outside the Internet hemisphere, I want to discuss here how to access the points of presence (PoP) of Internet service providers (ISP), and how to bridge the gap between switching on your modem and finally connecting to the PPP server (let's restrict it to this here!).
Yes, this title doesn't strictly follow the usenet hierarchy; if you can suggest a more appropriate one, please come up with it! For some time I regarded this topic as quite close to alt.internet.access.wanted, but ...
Chat scripts for dial-in through modems are needed in a lot of contexts, amongst others for UUCP, SLIP, PPP, ... . Some of them end immediately after the modem connect, some of them do a lot more.
The reason I am interested in iPass is that my main ISP (T-Online) uses iPass's worldwide network of PoPs and its authentication system outside Germany. But apparently iPass also cooperates with a lot of other ISPs around the world, so many others may also be interested in this topic.
iPass has (and provides you with) a huge table of PoPs (tpb.txt), specifying phone number rules and which dial-in script to use.
Certainly, this is another realm for Babel, i.e. another OS platform, another dial-in script language; iPass only provide you with dial-in scripts for use under Windows (scripte.scp, scripti.scp, scriptm.scp, scriptn.scp, scripts.scp, scriptu.scp) and also some for use under the Mac.
Excursion #1: once I'm dialed in through iPass's PoPs, T-Online still thinks I am on hostile grounds, so they won't let me do either POP3 or SMTP to their servers, which is a real pain. You may also have experienced this, and suffered a lot. But these days you still have more alternatives, and one of mine is an ISP supporting SSH connections and tunnels, i.e. port forwarding, so I can at least "sendmail" and "fetchmail" through them. Example:
ssh ...
Excursion #2: the almost inaccessible T-Online POP3 mail box can actually also be accessed through a web mail interface, which I had to script in order to make it usable efficiently.
Standard stuff, not discussed here -- apparently under Windows they use yet another Visual Basic extension for dial-in scripts.
Traditional UNIX dial-in scripts (for UUCP!) were written using the chat(1) utility. I think, it wasn't used a lot when (during the pre-PPP times) SLIP was fashionable, but chat got in use again with the rise of PPP.
Primitive chat scripts are quite easy to write, but if you attempt to deal with error situations, they obviously get complicated and even pretty soon unreadable. The man page even says: “In actual practice, simple scripts are rare.”
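For illustration, a primitive chat script of that kind is just a list of expect-send pairs plus some ABORT conditions; the phone number here is made up:

```
ABORT   BUSY
ABORT   'NO CARRIER'
''      ATZ
OK      ATDT0180512345
CONNECT ''
```

pppd would typically run such a file via its connect option, e.g. connect '/usr/sbin/chat -v -f /etc/ppp/sample.chat'.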
There was a time in the 1990's when Tcl got pretty fashionable, and as Tcl was thought of as being easily extendable, Don Libes published a Tcl extension by the name of expect, which is far more comfortable to use for implementing chat scripts than the original chat utility.
The working environment for what I am treating here is a Linux with Paul Mackerras' pppd.
At the time of this writing I just rewrote iPass' window-ish *.scp scripts for use with the chat utility. Frankly spoken, I actually left out all retry situations in order to just get them working. My wish is to further rewrite the *.scp scripts in expect, which shouldn't be a big deal ...
So I prepared files ipass-chati.chat, i being e, i, n, s, u;
I skipped ipass-chatm.chat, as the tbp.txt I found does not make use of it;
ipass-chatn.chat (for use with NTT, Japan) always reports a Remote Authentication server timeout, so I cannot seriously confirm it works; but T-Online support contributed in a newsgroup that they observed that as well with the standard scripts.
And I prepared sample pppd options files.
E.g. an options file to connect to a PoP in Bolivia is called ipass-scripte-591-bo, and the command line to execute looks like this: pppd call ipass-scripte-591-bo.
To get running you would also need a "standard" pppd options file ipass-options; I currently do not distribute one, as I don't like my own -- but honestly, a pretty simple options file will do.
You would also need your modem set-up string; I gained mine by running wvdialconf.
You would have to add appropriate entries to your /etc/ppp/chap-secrets and /etc/ppp/pap-secrets.
My TAR ball can be found here: http://www.ACM.org/~Jochen_Hayek/dl/comp.networks.dial-in/ipass-ppp-chat-scripts.tgz
I am ready and willing to discuss (and sometimes even to incorporate) changes and improvements.
I appreciate it if people tell me about their experiences with this software.
Legalese: ... (IMO this software is far too thin to think about legalese.)
You can configure it to go through different kinds of gateways, and it can also speak sftp, that is SSH's FTP look-alike.
I will refer to gFTP's special way of configuring how to pass gateways. First you have to fill out the fields under ...
Table 1.3. gFTP and its configuration specifiers:
%hh  remote host
%hu  remote user account
%hp  remote user account password
%pu  gateway user account
%pp  gateway user account password
Novell's NetDrive
fetchmail is the utility I use to download my e-mail.
I successfully tunneled fetchmail IMAP access through corporate networks' proxy servers,
and fetchmail can also work fine with socks5 proxy servers.
But as unexpected cooperation between fetchmail and socks once cost me quite a lot of hours, I suggest you place something like this in your ~/.profile:
export SOCKS_CONF=/dev/null
until you really, really want to go through a socks5 proxy server. Otherwise you might experience trouble at the most inconvenient time. The socksification applied within fetchmail is nothing you can specify through a fetchmail runtime or RC switch, so that environment variable setting is the only thing that can save you from damage or frustration. In March 2007 I instantly begged the fetchmail maintainers to change the default setting within fetchmail, but they were not open to my advice.
Outlook is more than a mail client, I regard it a personal information manager.
The CSV file import / export translators use auxiliary config files in c:\windows, which are most of the time quite unhelpful; they keep the state of some import / export processing and apply it to new processing actions as well (i.e. the applicable columns); the Microsoft knowledge base suggests deleting them (when they seem to disturb the process, they say, somehow ...).
There are customers that employ an IBM Lotus Domino server, which you would usually talk to with an IBM Lotus Notes client. Having used Outlook for quite a while, I never started liking the Notes client.
But as IBM and Microsoft are nice companies and wish their common customers the best, they talk to each other and develop connectors to each other's software. One outcome of that is that Microsoft developed software to let Outlook 2002 talk to an IBM Lotus Domino R5 server. They call it Microsoft Outlook 2002 Connector, but it connects just Outlook and the Domino server. You can get that software for free from Microsoft.
You still have to locally install an IBM Lotus Notes client. (IBM Lotus provides you with a free trial version.) Within the Outlook plugin, Microsoft makes use of the libraries that come with the Notes client.
By default you have access to your space on the Lotus server, but under an outlook:\\... address, not as an Outlook e-mail account of its own. Bookmark the address as a favorite and use it from there within Outlook (IE has no clue what to do with such a bookmark), as otherwise you will have to enter the address in the address field yourself!
In about:config you can manipulate a few internal settings:
network.prefetch-next : false # do you want to get possibly next web pages prefetched?
Do you want to get ping links marked?
For your Linux Firefox, (create and) edit your ~/.mozilla/firefox/default.pqw/chrome/userContent.css and insert this:
a:hover[ping] { -moz-outline: 1px solid green; }
This way ping links get a green frame, as soon as the mouse pointer touches them.
en.wikipedia.org
de.wikipedia.org
LEO english-german
PONSline english-german from www.pons.de
for myself: ~/Computers/Software/Internet/Servers/Mail/Sendmail/
For setting up sendmail or postfix on SuSE Linux, they have nice yast GUI modules. Still it's worth having a closer look at /etc/sysconfig/postfix or /etc/sysconfig/sendmail, as far from everything can be set up through the yast GUI modules. And whenever you change something manually in these files, do not forget this afterwards: SuSEconfig --quick --verbose, and also stop and start the respective rc script, like rcsendmail or rcpostfix.
On the GUI:
Of course you have to get the outgoing mail server right.
On the Masquerading panel I leave the field Domain for the 'From' header empty, in my case entering something here only leads to trouble.
Also on that panel, there is a field Domains for locally delivered mail. In the case of postfix I find a reasonable suggestion for that field in /etc/sysconfig/postfix, but if I assign that suggestion to the variable POSTFIX_LOCALDOMAINS, the yast GUI module refuses to work with that string, as it is postfix variable interpolation, which does not look like a hostname or domain name -- which is what the strict rules of the yast GUI module expect.
In the case of sendmail I enter the reasonable values in /etc/mail/local-host-names, and that even includes something like localhost.MY_LOCAL_DOMAIN.
Still on that panel, I entered a rule line mapping my very own local user to my valid world e-mail address.
On the Incoming Mail panel I forward root's mail to my very own local user, and in that case it lets me set the Delivery Mode to , but still for postfix I will have to set up the famous ~/.forward entry referring to procmail, otherwise everything goes to my local user's standard system mail box.
After a lot of confusion and trouble, this simple recipe finally solved my sendmail problems. Actually I switched back and forth between sendmail and postfix a few times, to understand what the yast GUI module and the sysconfig files really mean, and what purpose the single entries serve. That really helped!
TransConnect, also simply referred to as tconn, provides you with transparent TCP access through a proxy. It intercepts all connections to non-local destinations and “does the right thing” with them, making use of the proxy's PROXY CONNECT capability, which is normally used for https access. But despite the misleading name, that capability is not just for ssl'ed http access, but for any connection to be passed through the proxy. Obviously every browser supporting https includes a module like transconnect, but transconnect is the only such module available separately.
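What that PROXY CONNECT capability looks like on the wire is a single request and reply, after which the proxy just relays bytes in both directions; host, port and proxy are shown with made-up values:

```
CONNECT mail.example.net:25 HTTP/1.0

HTTP/1.0 200 Connection established
```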
Get CGIservlet.pl here -- a real tiny web server including CGI. The downside is that it runs only one CGI script on one port at a time. You can still run more instances on other ports.
If you really want to do a little CGI programming, but ...
your lovely system administrators don't run Apache yet,
or they want you to install Apache,
or they just don't want to run your CGI scripts on their web server,
or ...
-- they will hardly detect this one.
Using it really gave me a boost after quite a while of frustration regarding my simple way of writing HTML forms with CGI backends for controlling some of my production processes.
PATH_INFO and PATH_TRANSLATED of a stateful script (using CGI.pm) get the SCRIPT_NAME appended over and over and over again. That gets reflected within the web browser's location text field and looks rather strange, but actually with no negative impact, as CGIservlet.pl does not make use of the above environment variables for finding the script to be executed; it always only executes that one script that it was started with.
If you take a simple example like time4.pl from Lincoln Stein's book on CGI (you find it in time4.txt), you see the effect immediately. I debugged the code for a short while (to no success) and found that the SCRIPT_NAME keeps getting appended to PATH_INFO and PATH_TRANSLATED.
CGIservlet.pl does seem a bit tricky and complicated, though. No, I don't think that the problem is caused by CGI.pm. As you can see, time4.cgi runs here with no such problem.
I can somehow fix the problem if I use CGI.pm's start_form with -action => '' or -action => 'something' [23], but not with its default value, which is rather a pity.
The #!-line of your script is being ignored.
That's not troublesome at all, if you are aware of it.
And that's because of the way your CGI script gets called by CGIservlet.pl: via a perl do-construct.
I do not agree with CGIservlet.pl's author's advice of symlinking the real perl's absolute file path name into CGIservlet.pl's installation directory (“startup directory”) and also putting your CGI script there. But instead I agree with his (actually his first) advice that you make it somehow obvious which perl you want to use, by simply changing CGIservlet.pl's own #!-line to point to your favourite perl binary.
(I am sorry, maybe it sounds nicer to first agree and then disagree, but ...)
Yes, most of the time there is only one perl installation on a machine, but consider the case of a pretty modern high-end Solaris machine coming with its own perl installation, which you find far too old and unsupported module-wise by your system administrators. So you will go and install perl yourself, if they let you and / or don't find out that you do so. And you want your CGI script to use your perl, I assume. That's the context I am living in.
An experience I think I must let you know about: use this line in your script, so that all calls to warn, die, carp, ... get formatted nicely into what gets sent to the browser:
use CGI::Carp qw(fatalsToBrowser);
rsync
GNU wget
curl (doesn't really belong here, but ...)
pavuk
What rsync has over wget and curl (although they obviously don't cover the same protocols -- curl esp. doesn't even support mirroring) is that if rsync gets interrupted, it does not leave a corrupted (i.e. incomplete) file behind. Right, it may be nice and comfortable to be able to resume downloading a huge file whose transfer got interrupted in the middle. But there should at least be a switch to tell the utility not to leave incomplete files.
I seriously need that feature, as I have a spooling environment, where one class of spoolers is responsible for mirroring web server directories into local directories (the download-spoolers) and another class of spoolers picks up newly arrived files for further local processing (the comparison-spoolers[24]).
Where rsync does the mirroring, I have no problem, because when rsync completes successfully, the files are complete. Actually both kinds of spoolers must be put in a loop, as they can get interrupted in the middle of a transfer. The solution may be to regard the “looped wget” (i.e. wget in a loop) as a critical region, during which the contents of the directory into which wget downloads may not be taken as valid. A certain phase within a comparison-spooler is also a critical region, namely when it takes a copy of the TOC of the local mirror. The comparison-spooler may even decide to remove a corrupted file in the local mirror. But what if the original remote version is the source of the problem? The download-spooler would end up downloading the file endlessly ...
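The critical-region idea for a single download can be sketched in shell: fetch into a temporary name and rename into place only on success, so that a comparison-spooler never sees a partial file. The cp below stands in for the real wget call, and all file names are made up:

```shell
# fetch into a temp file next to the destination and rename only on
# success; mv within one filesystem is atomic, so readers never observe
# a half-written file
atomic_fetch () {
    src=$1 dest=$2
    tmp=$(mktemp "$dest.XXXXXX") || return 1
    if cp "$src" "$tmp"; then      # stand-in for: wget -O "$tmp" "$url"
        mv "$tmp" "$dest"
    else
        rm -f "$tmp"
        return 1
    fi
}

echo payload > upstream.dat
atomic_fetch upstream.dat mirror.dat
```

On failure the temp file is removed and the destination is left untouched, which is exactly the rsync behaviour praised above.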
wget is especially good for recursive downloading from HTTP and FTP servers.
In case you have an HTTP proxy between you and such an FTP server, all you get is an HTML listing of the directory you specified. I tried hard to overcome that problem by instructing wget to consider that HTML listing an HTML file (--force-html), and esp. by making that HTML listing the --input-file=... and that specified directory the --base=..., but wget gets it quite wrong and assigns pretty strange file names out of the context of that directory to the files downloaded.
You actually have to do a little scripting ..., e.g.:
curl --silent ftp://ftp.atdd.noaa.gov/pub/dumas/software/batstore/batstore-5.6.8/ | ruby -ne 'if $_ =~ /^<A HREF="([^"]*)"/ and $1 != "../" then print $1, "\n" end' | while read f; do echo $f:; wget --mirror ftp://ftp.atdd.noaa.gov/pub/dumas/software/batstore/batstore-5.6.8/$f; done
From the code I showed you it's not necessarily obvious that there is a proxy in the middle; and if there is none, this code does not make sense.
“The Art Of Scripting HTTP Requests Using Curl” by curl's father Daniel Stenberg (a very remarkable article)
I do manual and scripted downloading and also uploading (including serving HTML forms and CGI script front-ends) with it over ftp, http and https.
You will hardly ever use ftp again, once you got used to curl and wget.
Daniel Stenberg's formfind.pl is a very valuable help in analyzing HTML forms.
I do manual and scripted mirroring and also “simple downloading” with it over ftp, http and https.
You will hardly ever use ftp again, once you got used to wget and curl.
From a first glance at it, it looks to me as if it does mirroring through rsh and ssh (like rsync does).
I do mirroring and also “simple copying” with it over rsh and ssh.
You will never use rcp and scp again, once you got used to rsync.
The protocol is called Remote Desktop Protocol, or RDP for short. There is some UNIXoid Open Source software called rdesktop acting as an RDP client under X-Windows.
This is how I found it convenient to call the utility:
rdesktop -D -a 24 -k de -r sound:local -g workarea -u johayek 10.0.4.116
I would actually have liked to effectively use -a 24, but I don't get beyond 16 bpp, even together with -C. I still do use -a 24 and accept the following warning, so I just get reminded that I am effectively only getting 16 bpp, although I actually prefer true 24 bpp:
WARNING: colour depth changed from 24 to 16
I have problems getting all keys mapped correctly, e.g. I never manage to get the circumflex key on a German Windows keyboard right.
Here are a few hints to follow, once I'm going to have another try to fix this problem:
xmodmap -pk
xkb
xkbcomp
/usr/X11R6/bin/xev or /usr/openwin/demo/xev -- xev shows that the key is known on Windows as dead_circumflex
...
...
Always, always execute vncserver like this in such an environment:
$ vncserver :2 -nolisten local
Although /tmp/.X11* is not being used then, Xvnc still knows how to start new independent X servers :2, :3, ... with different and non-overlapping sets of files.
Perhaps you wonder why it appears here, but I consider it to be a kind of network, and the future will prove me right, as you will recognize. My favourite fax package is mgetty+sendfax. (Although the name is sendfax, I use it for sending and receiving faxes. I even constructed a mail2fax gateway using this nice package.)
For some reason -- at least in Germany -- it's a must (telebanking, ordering my railway tickets ...). In the past its terminals (Btx decoders) were only a little more intelligent than VT100 terminals.
Many commercial applications use the programmability feature of these terminals, even MS-Money does, so this kind of interface will / can never go away (`pc gateway'). I once implemented an automatic communication service for a CEPT terminal; I called it an input feeder, because its task was to simulate the input to be done by a human user. I also wanted to extend it then, so that it could also extract certain specified areas of `subsequent' pages; I'm going to do that using Tcl with the CEPT terminal package I currently use. Nowadays the (UNIX based) CEPT terminal (emulation) of my choice is XCept (version 3.0 uses Tcl/Tk); to be honest, I don't know of any other one. This is where you could get XCept-4.0 from (you will have to pay very little money for it!): www
The sources of XCept-3.0, which I had used for a long time, were freely available. I implemented quite a few add-ons in this framework for my personal use of T-Online.
Whenever you want to communicate between two computers coupled via internet, ethernet, serial (or parallel?) line, or a modem, you can always use kermit from Columbia University. Whenever e.g. I couple a UNIX and a DOS or Windows machine via serial line, I let the non-UNIX machine run a kermit in server mode. That's most easily installed, but quite effective! Obviously, it's one of my favourites!
Have a look here:
for virtually every platform, even some archaic ones.
I am a personal supporter of the Kermit Project, I bought that book.
See also:
GUI toolkits (should) come with GUI builders.
I used[25]:
Sun's DevGUIDE
the Siemens-Nixdorf OSF/Motif GUI builder
SpecTcl for Tcl/Tk (and it also works for perl, Java, python, and ruby)
programming: course only; usage: (of course)
OW itself is quite aesthetic to look at.
xview is a really tiny but very powerful programming interface.
I got good programming experience in xview.
It's the lovely part of Tcl/Tk.
It can also be used embedded into perl and Scheme (-> STk, GUILE) (I would prefer that, but because of its history, Tcl itself is always 1st choice), ...
After starting with some autodidactic know-how, I have meanwhile coded a GUI for an EDI communication system with Tcl/Tk. Since 4.1, Tk has had grid oriented geometry management, and a GUI builder in common with Java called Spec{Tcl,Java} (local link only!). Have a look at up-to-date information! Spec{Tcl,Java} itself uses an easy to install Tcl/Tk extension called Blt for gridded geometry management, and the generated Tcl/Tk code[26] depended on that extension package, too. Meanwhile Tk got new builtin geometry management for that purpose, and so far the generated code can run on all vanilla Tk4.1 platforms. Hopefully the same transition also happens to Spec{Tcl,Java} itself.
See @pxref{java}.
John Bradley's XV, see “John's World of XV”
fig, xfig and associated software
metacity (my favourite after abandoning Enlightenment)
Enlightenment window manager (my favourite after abandoning KDE)
Devil's Pie (you can't use metacity w/o it)
Devil's Pie's only documentation initially (> 0.13) was its src/parser.c
Devil's Pie -- the wiki started by the author of Devil's Pie himself
Devil's Pie -- another wiki, at some stage far better than the author's wiki
I am a user and an administrator of MS-Windows up to WinXP (which I like more than I admit to myself).
There is really a whole lot to say about operating systems, I just started with a list ... I will treat only families of operating systems, e.g. I will not distinguish all those UNIX flavours on this subsection level, but only one level deeper. But before I start with those families, I'm going to consider certain main components of operating systems. I'm trying to complete that list during this decade :-)
Some programming exercises of my computer science introductory courses were to be done on a Burroughs host.
Did you know that Siemens bought the licenses from RCA? Don't you laugh about that!
I never used CP/M itself, but its MC68K clone called Atari TOS and its Intel clone called Microsoft DOS. I got experience as a user and with system administration. CP/M was developed by Digital Research.
It's also called OSIRIS. It was a very, very good idea, and it was more than an operating system. They even had a dedicated CPU; I think the i960 is its little brother, meaning it misses out the capabilities stuff and some of these things. I loved it deeply. It was a project conducted by Intel and Siemens, mainly in Portland, Oregon. During a late period a common daughter called BiiN was founded. It was never operating system main stream. There was some kind of UNIX Sys V R3 (?) emulation. Some Siemens managers killed it during some yearly reorganization driven by some famous management consulting company ..., that time concerning the information technology divisions of the different Siemens enterprise divisions (`Unternehmensbereich'). It was my reason for going to Nürnberg (@pxref{Nuernberg}), and it was my reason for going to Berlin (@pxref{Berlin}).
Thank you, Linus, and all the brave folks contributing to it! Well, of course, my PCs run Linux with loadable modules and the kernel daemon, which is able to load modules on demand: you start with a really minimal kernel at boot time, and components needed only later will be loaded only later - and also unloaded as soon as they are no longer being used. Some of you might share my impression that, because of the way Linux is constructed and especially contributed to, some parts of it are hopelessly outdated. Especially the treatment and coverage of sound cards is quite appalling. But luckily enough (and I really don't mind it), meanwhile you find commercial support for Linux.
If you do a system upgrade, you always experience some sort of trouble. I want to describe my trouble here, maybe it will help somebody. Probably I will need the descriptions myself another time -- during the next upgrade.
One area that is particularly important to watch is demand dialing: there is always something changed with a new release. Nowadays I use a hardware NAT/firewall router in my home office, connecting me via DSL with a flat rate, which makes my life a lot easier. But when I'm out of town I'm linked through a GPRS mobile phone / modem, and also there (because of the nature of GPRS) I stay connected just as long as necessary. And even my local PPTP VPN connections are a sort of dial-up connection, as they work through PPP with an MS specific authentication (MS-CHAP) and with yet another encryption.
Generate certificates as described in /usr/share/doc/packages/imap/README.SuSE
!
Certainly, /etc/inetd.conf
must have a proper imaps
entry.
The IMAP server uses /etc/cram-md5.pwd
as authentication data base.
That's it.
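For reference, a classic inetd-style imaps entry might look like the line below; the server path and the use of the tcpd wrapper are assumptions that depend on your installation:

```
imaps  stream  tcp  nowait  root  /usr/sbin/tcpd  imapd
```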
Starting with this release, the source RPMs don't come on the DVD any more, but apparently on one of the companion CDs. I usually don't bother carrying these CDs with me, as so far the DVD has been good enough for me.
For my e-mail environment behind a firewall I need at least a certain setting, as justified in /usr/share/sendmail/README
:
If you are inside a firewall that has only a limited view of the Internet host name space, this could cause problems.
The name of the setting: accept_unresolvable_domains
.
This gets only set through /etc/sysconfig/sendmail
's SENDMAIL_EXPENSIVE
,
as I found by browsing /sbin/conf.d/SuSEconfig.sendmail
.
There are people and companies that keep sending me e-mail messages
that I can only read using Outlook,
and as I actually like Outlook pretty much,
I do not complain if I have to use it.
My mailbox files reside in $HOME/Mail/
,
and that's where I let my IMAP client access the mailboxes.
Starting with this SuSE Linux release,
the provided UWash imapd stopped serving plaintext authentication;
instead it offers CRAM-MD5 (/etc/cram-md5.pwd
) authentication,
but as Microsoft Outlook does not support that,
and SuSE basically left me alone with this problem,
I did not get my easy IMAP setup working again.
After desperately surfing around for a while, I was lucky enough to find this article: Linux imapd with SSL quick howto. The description is just perfect, and I only have to add a few tiny bits.
E.g. two more "development" RPMs are necessary for compilation: pam_devel and openssl_devel.
I wanted to recompile the imapd sources from SuSE's patched source RPM, but for reasons described above, I didn't have the CD with the sources with me, so I downloaded imap.tar.Z from its home server.
Obviously the directories where things are expected are a little different, so I had to create an extra symlink before the first start of the new imapd, and the make command line reads a little differently:
ln -s /etc/ssl/certs /usr/local/ssl/certs
make lnp PASSWDTYPE=pam SSLTYPE=unix EXTRACCFLAGS=-I/usr/include/openssl
Otherwise just follow the article quoted above, and you will be fine -- and ready very soon!
Thanks a lot, Shane Chen, for my SSL enabled IMAP communication!
GNU emacs has substantial changes for National Language Support.
This broke my German keyboard / display for "AltGr-..." combinations and umlauts.
It is actually much simpler now,
but it took me a while to figure out what to delete and what to add -- and this is just about the entire difference from a comparable U.S. setup:
basically I removed just about everything related to ISO-8859
and set the customizable option current-language-environment
to German
.
This is actually amazing.
This is certainly good, but I wasn't prepared for it, and I did not want to spend the time improving my scripts that make use of curl (like automated web-mailing and my web-banking account statement download).
Luckily I found the command line option --insecure
,
that allows for doing without server certificate check.
This probably already applies to releases earlier than 8.1, but I spent time on this when I migrated to 8.2, so ...
If you employ crypto file systems, you already know this:
at boot time (in SuSE speak: runlevel B) /etc/cryptotab
gets processed and crypto file systems get mounted.
But for mounting them, passwords are needed.
So remote rebooting a machine with crypto file systems is a mess.
My simple approach to this is to just remove boot.crypto
from runlevel B,
that means from /etc/init.d/boot.d/
.
No big deal at all, but effective.
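The removal itself is a one-liner; here is a sketch against a simulated boot.d directory (the mktemp playground and the script names are mine; on a real system the directory is /etc/init.d/boot.d/):

```shell
# simulate /etc/init.d/boot.d/ so the removal can be tried out safely
bootd=$(mktemp -d)
touch "$bootd/S03boot.localfs" "$bootd/S05boot.crypto"

# drop the crypto-mount step from the simulated runlevel-B sequence
rm -f "$bootd"/*boot.crypto

remaining=$(ls "$bootd")
echo "$remaining"
```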
Alright, attaching and mounting hard disks through a Firewire port is easy, but once you have logically unmounted such a device, how do you tell the kernel to properly release devices from a Firewire port?
root@HOST # rmmod sbp2
So, you removed the kernel module, but you didn't really detach the device. Now you want to re-mount the device, but the OS tells you:
mount: /dev/sda1 is not a valid block device
That's because the kernel module is not loaded, and because the kernel is not triggered to load it just by the logical mount attempt. The usual trigger event is to physically attach the device. But we know how to get the kernel module reloaded, it's similar to the removal of the module:
root@HOST # modprobe sbp2
This is not only about PCMCIA but also about CardBus.
Our Dell Inspiron 7500 does not support USB 2.0, but I am seeking to attach a big fat external hard disk through USB 2.0 (or should I pursue the Firewire track?). So I thought the way to go is to get a CardBus-to-USB-2.0 adapter (timestamp: 2003-07). The problem is obviously that it has to run under Linux, currently SuSE 8.2 coming with Linux Kernel Card Services 3.1.22.
Which cards are appropriate for this purpose?
One article said that “Hotplugging has always been an issue with CardBus according to the 'PCMCIA Howto'”, but actually these cards are PCI bridges and already recognized by the kernel. Maybe the card should only be inserted after booting.
http://www.qbik.ch/usb/devices/search_res.php?pattern=cardbus
(this is where I found the 2 cards listed below)
Adaptec's USB2connect for Notebooks
Adaptec USB 2.0 CardBus 2-Port EU f. Notebooks (inmac.de: EUR 44)
ADS Technologies' USB Port For Notebooks (USBX-501) (they have cards of different card types)
ADS USB 2.0 Turbo CardBus ctlr (USBX2001EGS) (inmac.com: EUR 49)
Belkin F5U222yy (inmac.de: EUR 96)
Belkin F5U222{df,} (inmac.de: EUR 69; tendi.com: EUR 53)
I found an article in comp.os.linux.portable
saying the author would make use of a NEC USB-2.0 card with exactly our combination 7500/8.2.
How to do a filesystem check on an encrypted partition?
##$ modprobe loop_fish2    # this may be important!!! -- the old version
##$ losetup -e twofish /dev/loop7 /dev/sda1
$ modprobe twofish256      # this may be important!!!
$ losetup -e twofish256 -H sha512 -C 100 /dev/loop7 /dev/sdb2
$ e2fsck /dev/loop7
$ losetup -d /dev/loop7
My current favourite operating system. I got experience as a user, programmer, script writer, and system administrator. I used several of those many flavours: à la AT&T, à la BSD, à la Linux, AIX. Let's talk about derivatives:
SVR4 is currently owned by the Santa Cruz Operation (`SCO').
SCO is proprietary ...
but wait! SCO is the current owner of SVR4, so be optimistic with SCO! it's going to be SVR4 - great!
AIX-3 is proprietary shit.
AIX-4 seems to be a mixture of OSF/1[27] and SVR4 - quite ok.
Solaris2 is SVR4 - great!
OSF/1 - not quite up-to-date UNIX; OSF stopped work on it and officially recommended to its members to use the one and only SVR4.
The first command language I used for writing scripts. I still prefer it to the C shell. I'm quite familiar with it. For small things I even prefer it over perl.
bash - the Bourne-again shell - and ksh - the Korn shell - are nice derivatives that know how to edit the history, ...
the first UNIX shell with true command line editing features; you were able to use emacs-like or vi-like editing; its extensions to its ancestor (the Bourne shell) are quite strange ..., though sometimes useful;
is a derivative I got acquainted with only recently (in January 1996).
It is the 1st UNIX shell
that knows about programs' parameters,
which is a most valuable feature.
Of course it knows those parameters and their types
only if you declare them.
also has the spelling error correction feature
that tcsh had first.
other nice features:
associative arrays,
reverse subscripting
(that's actually a table lookup you will like to use for implementing sets),
...
I started using zsh in a production environment
when I couldn't do without associative arrays any more.
A while later I saw myself forced to make use of its option NULL_GLOB
,
as otherwise an exception got raised that aborted the running function:
“
If a pattern for filename generation has no matches,
delete the pattern from the argument list instead of reporting an error.
Overrides NOMATCH
.
”
If I remember right,
past shells did not raise such an aborting exception.
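For readers stuck on bash: it has an analogous switch called nullglob (the option name differs, but the semantics match the NULL_GLOB behaviour quoted above); a minimal bash sketch:

```shell
# run in an empty scratch directory so no pattern can match by accident
cd "$(mktemp -d)"

shopt -s nullglob
files=(*.does-not-exist)
n_with=${#files[@]}      # pattern deleted from the argument list -> 0 words

shopt -u nullglob
files=(*.does-not-exist)
n_without=${#files[@]}   # default bash: the literal pattern survives -> 1 word

echo "with nullglob: $n_with, without: $n_without"
```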
Well, I don't know why people write scripts in C shell; it's an awful programming language - full of syntactic restrictions. I admit I know how to use it seriously, but you should avoid writing C shell scripts for the rest of your life. It was a step forward once, when it introduced the command history, but there are other shells that do it better. At least one derivative of it - tcsh - knows how to edit the command history, and - might be a nice feature - it knows how to correct spelling errors, but do you really want that feature?
The perl module WebFS::FileCopy is the one with the most general approach amongst them: [Get, put, move, copy, and delete files located by URIs.] I do regard it as a shell utility, as you can use it for shell one-liners because of its compactness. Apart from its overwhelming definition, it practically seems a bit short of operational to me right now (June 2000).
But curl and wget are really functional, and esp. curl also supports ftp file upload to a URI [30], but I do love wget also for a lot of features.
find . -type l | while read line; do ref=$(readlink "$line"); test -e "$ref"; echo "{$line}=>{$ref} // $?"; done
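GNU find can produce the dangling-link part of that report on its own: -xtype l matches symlinks whose target does not exist (a GNU extension; the scratch files below are made up):

```shell
# build a scratch tree with one good and one dangling symlink
d=$(mktemp -d)
cd "$d"
touch real
ln -s real    good     # target exists
ln -s missing broken   # target does not exist

# -xtype checks the type of the link's *target*, so only the dangling link matches
dangling=$(find . -xtype l)
echo "$dangling"
```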
You can certainly do the following solely using find(1), but what if it gets a little trickier? It takes a file listing on stdin.
find . -maxdepth 1 -type f | ruby -r etc -ne 'chomp; st=File.stat($_); printf "%s , %d , %s\n",$_,st.size,Etc.getpwuid(st.uid).name'
find files such that the listing can be sorted by ...
This time I start searching in my home directory.
Obviously I'm not interested in files in the netscape,
and in RCS
subdirectories,
so I better ignore them.
Maybe you want to extend that list.
$
find . \
-name '*.etf' \
-printf '%T+ '\
-print |
fgrep -v /RCS/ |
sort
$
find . -type f -printf '%T+ ' -ls | sort
$
find . -type f -printf '%s:' -ls |
perl -ne 'm/^(\d+):(.*)$/ && printf "%012.12d:%s\n",$1,$2' |
sort -r
or just simply:
find . -type f -printf '%s:' -ls | sort -rn
$
find . -type f -printf '%p:' -ls |
sort |
perl -ne 'm/^[^:]+:(.*)$/ && print $1,"\n"'
$
find . -empty -prune -or -type f -printf '%p:' -ls |
sort |
perl -ne 'm/^[^:]+:(.*)$/ && print $1,"\n"'
$
cd /media/_ARCHIVE_1/home/a.b/music/
$
find ? -name Folder.jpg -printf '"%p","%kkb"\n' | sort | less
$
(echo '"i-node#","size in kb","permissions","no of hard-links","user","group","size in bytes","mtime","name","symlink"';
find . -printf '"%i","%k","%m","%n","%u","%g","%s","%TY-%Tm-%Td %TT","%p","%l"\n')
this one would help if you want to sort files by size, age, and so forth:
$
(echo '"i-node#","size in kb","permissions","no of hard-links","user","group","size in bytes","mtime","name"';
find . -type f -printf '"%i","%k","%m","%n","%u","%g","%s","%TY-%Tm-%Td %TT","%p"\n')
this would then sort (desc.!) the output by the sixth field (starting after the quote character) (size)
$
sort +6.1 -n -r -t, ~/tmp/find.csv > ~/tmp/find-by-size.csv
cut off all smaller than eg. 1 GB
this would then sort (desc.!) the output by the seventh field (mtime):
$
sort +7 -r -t, ~/tmp/find-by-size.csv > ~/tmp/find-by-mtime.csv
cut off all younger than ...
this would then sort the output by the eighth field (name):
$
sort +8 -t, ~/tmp/find-by-mtime.csv > ~/tmp/find-grouped.csv
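The `+pos` key syntax used above is the historical, zero-based form; POSIX sort spells the same thing with `-k` (one-based). A sketch on a made-up two-column CSV:

```shell
# the historical `sort +6.1 -n -r -t,` corresponds to `sort -t, -k7.2,7 -n -r`
# in POSIX -k notation (+F.C is zero-based, -k counts fields and chars from 1);
# demonstrated here on made-up data, sorting field 2 after its opening quote
f=$(mktemp)
printf '%s\n' '"a","10"' '"c","300"' '"b","20"' > "$f"

sorted=$(sort -t, -k2.2,2 -n -r "$f")
echo "$sorted"
```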
I found these data through de.msn.com
, Money
, Börse
, Devisen
, Crossrates
,
and I thought I would create another nice oneliner from it:
$
curl --silent 'http://forex.focus.msn.de/?node=crossrate&section=1' | ~/Computers/Programming/Languages/Perl/use_HTML-TableExtract.pl --job_extract --file=/dev/stdin --depth=2 --count=1
[...]
"EUR/CHF","1.5606 ","07.06.","12:56:57","1.5615","1.5584"," ","0.002","0.10"
[...]
"EUR/GBP","0.6875 ","07.06.","12:56:57","0.6896","0.6874"," ","-0.002","-0.26"
[...]
I found this URL through an external data enquiry for ForEx, that you can make use of from within Excel:
$
curl --silent 'http://moneycentral.msn.com/detail/stock_quote?Symbol=/CHFEUR' | perl -ne 'm,<span class="s1">([\d.]+)<\/span>, && print "$1\n"'
1.56201
...
...
env LC_ALL=de_DE cal -m -y | ~/Computers/Programming/Languages/Perl/a2ps.pl -L 'berlin-recycling.de' -ns -nt -p -fx1.25 -l 'Weißglas - jeden 4. Freitag' | ps2pdf - > ~/tmp/cal-2008-Weissglas.pdf
Some programming exercises of my computer science introductory courses were to be done on a UNIVAC host.
Here are a couple of the most important utilities.
msconfig
is the System Configuration Utility.
It allows you to selectively switch on and off programs that want to get started at system boot.
regedit
is the Registry Editor.
There are program attributes that you can only set through regedit
.
...
You simply have to append the current Download Center code as “&Hash=DOWNLOAD_CENTER_CODE_OF_THE_DAY”,
but where will you get the “DOWNLOAD_CENTER_CODE_OF_THE_DAY” from?
Get the Microsoft Genuine Advantage Diagnostic Tool known as mgadiag.exe
,
search for it, together with “download”,
and this tool will display the “DOWNLOAD_CENTER_CODE_OF_THE_DAY” for you.
If you ever happen to encounter this error message:
The following steps of the repair failed:
Renewing the IP address. |
Refreshing all DHCP leases and re-registering DNS names. |
Please contact your network administrator or ISP.
or in a cmd console window it looks like this:
ipconfig /renew
Windows IP configuration
An error occurred while renewing interface "LAN ...":
The system cannot find the file specified.
..., listen to this:
If it looks like a problem with the DHCP client, it is a problem with the DHCP client, so go and re-start the DHCP client service.
If you ever happen to get told, your TCP/IP stack needs resetting, re-installation, ..., have a look at this:
I was pointed to the following Microsoft Knowledge Base Article Q299357.
Basically it tells you to run this command line:
netsh int ip reset [log_file_name
]
Actually, when I was told to reset the TCP/IP stack, it was mere rubbish. As it turned out, the reason for my problems was a stopped DHCP client service and nothing else.
Once in a while[31], you have to `recover the Win95 registry'. These are the commands I have to use (mainly in the German context):
"press F8 during startup"
"choose the option `Nur Eingabeaufforderung' (command prompt only)"
cd c:\windows
attrib -h -r -s system.dat
attrib -h -r -s system.da0
attrib -h -r -s user.dat
attrib -h -r -s user.da0
copy system.dat system.bak
copy user.dat user.bak
copy system.da0 system.bak
copy user.da0 user.bak
"restart the computer"
Nowadays they call it a micro-kernel. Its main principle is to avoid including things in the kernel whenever they can be done outside, too. Once I dreamt of using its threads for implementing Ada tasks, but that time has not come yet, I think. But the way to a GNAT retargeted to an Intel Hurd using Mach pthreads seems not that long any more. And then you can really do it.
I would like to see it very soon, better today than tomorrow. It's an operating system with so many features so many years being waited for now. Well, sorry, I must admit, I can't contribute to it. As Gordon Matzigkeit (@cite{[Gordon_Matzigkeit]}) said, `shortly' release UK02p13 is not stable. Also my own second approach failed painfully, because I had to learn
that Hurd development is being done on BSD platforms,
which means that currently you can bootstrap Hurd only if you have all those necessary BSD utilities,
and that BSD uses a disk partitioning and labeling scheme rather different from (though not really incompatible with) that of Linux and others.
I used it, when I participated in the Ada compiler port to this operating system, and I have to use it nowadays once in a while, because it's the platform of one of my Internet providers. Its command shell is quite unique and very valuable: it knows about programs' parameters.
I would prefer considering DOS a `no comment' object, but ...
A boot loader is that piece of software you run to start up your OS. A list of boot loaders:
the Linux loader
the former primary Linux loader (using the PC BIOS)
`load linux' from within any kind of Super or Hyper CP/M
(what's the name of that tiny OS serving as a more flexible boot loader? I think I read about it in a Linux `.announce' newsgroup.)
A list of file systems:
(UNIX community ...)
(Linux community ...)
(Linux community ...)
(Linux community ...)
also used by Atari TOS (because it is the MC68k port of CP/M) and by Microsoft DOS (because it is the Intel port of CP/M), extended in order to allow for `long names' and called vfat like `virtual FAT';
in the Linux environment they are called msdos, resp. vfat;
there are many other approaches to `long file names' on top of FAT directories; another Linux approach is called umsdos. smb is a FAT network emulator; its free unix implementation is called samba. In Linux you even have smbfs, that's kernel support, so you can unix-ish mount smb file systems - great, isn't it?
nowadays there are NFS server and client programs for every kind of platform, even for MS Windows an NFS client is available
the free unix auto mounter daemon is the alternative to fixed mounting of any kind of file system, instead you get mounting on demand
A list of file system utilities:
a package of utilities for use with several types of Linux file systems[32]; I really enjoy the idea that the software I use can access `contiguous files' - read and write files without seeking all over the whole disk;
authors: Stephen Tweedie and Alexei Vovenko
checks logical file system integrity and does a physical disk check, which I consider most valuable; I always recommend this to my clients using Microsoft operating systems; I do not know of an equivalent UNIX utility yet;
...
PHP web-based project management framework that includes modules for companies, projects, tasks (with Gantt charts), forums, files, calendar, contacts, tickets/helpdesk, multi-language support, user/module permissions and themes.
(No corresponding dmoz entry, but I liked this place in the directory hierarchy, as it closely resembles, where you can find the entry RPM in dmoz.)
See also:
Certain Linux distributions (starting with Redhat, including SuSE and probably others) use the rpm suite of utilities.
A few practical examples:
rpm2cpio .../fetchmail-6.3.2-15.src.rpm | cpio -tv  # just list the contents
rpm2cpio .../fetchmail-6.3.2-15.src.rpm | cpio -iv  # extract the files to the current working directory
rpm -bp package_spec
rpm -bp /usr/src/packages/SPECS/suck.spec # %prep
rpm -bc /usr/src/packages/SPECS/suck.spec # %build
test the package:
rpm --install --test /usr/local/src/fetchmail-4.1.2-1.i386.rpm
rpm --install --hash --force /usr/local/src/fetchmail-4.1.2-1.i386.rpm
rpm --upgrade --hash --force /usr/local/src/fetchmail-4.1.4-1.i386.rpm
rpm --upgrade --hash --nodeps --force /usr/local/src/fetchmail-4.1.7-1.i386.rpm
If it's a .rpm not from SuSE,
it probably depends on non-existing packages
- maybe SuSE names them differently -
so you need a --nodeps
,
if you're sure the dependencies are ok.
I found good documentation in docs/queryformat
- very, very good!
This lists `all' packages installed (sorted by package names and ...):
rpm --query -a | sort
Which package owns $FILE:
rpm --query -f $FILE
rpm --query -f $(type -a -p imapd)
This lists a package together with all its files:
rpm --query --queryformat "[%{=NAME}-%{=VERSION}-%{=RELEASE}:\t%-50{FILENAMES} %10{FILESIZES}\n]" zip-2.0.1-1
rpm --query --queryformat "[%{=NAME}-%{=VERSION}-%{=RELEASE}:\t%-50{FILENAMES} %10{FILESIZES}\n]" -a # *all* packages
Here is one suggestion to solve the Y2K migration problem, which behaves well w/o the need to change old data: you regard the higher valued digit of a 2 digit year representation as a hexadecimal digit. (Well, I'm not going to explain hexadecimal numbering here, and in case you don't know it already, you cannot be assigned a Y2K job IMHO.) Here is an example showing the physical representations of the years 1990 through to 2010:
Example 1.1. ...
90, 91, 92, 93, 94, 95, 96, 97, 98, 99, A0, A1, A2, A3, A4, A5, A6, A7, A8, A9, B0.
This method is actually applicable
if you use ASCII (or a derivative of it), EBCDIC, or even BCD,
although you might then want to give the latter another name
instead of Binary Coded Decimals.
This method does not change your space requirements
for your date representation - 2 bytes resp. 2 nibbles -
it gives you the same comparison results (in ASCII and in BCD),
and if you want to print out a 2 digit year representation
in 4 digits, you have to do something like this:
Example 1.2.
if ($yy le '99') {
    $yyyy = '19' . $yy;
} else {
    # the first digit counts decades in hex: 'A0' -> 2000, 'B0' -> 2010
    $yyyy = 1900 + hex(substr($yy, 0, 1)) * 10 + substr($yy, 1, 1);
}
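The mapping can be spot-checked with a small shell sketch (the function name yy2yyyy is mine, taken from the prose below):

```shell
# decode a 2-digit year in this scheme: the first digit is read as a hex
# digit counting decades from 1900, the second digit stays decimal
yy2yyyy() {
  local first=$((16#${1:0:1}))   # '9' -> 9, 'A' -> 10, 'B' -> 11, ...
  echo $((1900 + first * 10 + ${1:1:1}))
}

yy2yyyy 99   # the last pre-2000 year
yy2yyyy A3
yy2yyyy B0
```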
If you fail to apply yy2yyyy conversion
and present an A3-like year number, it's only scary;
it's still not actually damaging anybody,
it only looks weird until people get used to it
and until you fix the print out method.
If you actually want to quote my approach in public
and you want to give me proper credits[33],
pls refer to it as the Hayek Y2K approach.
[21] once again, this is to remind myself of something really trivial
[22] into the Microsoft developer studio
[23] that is used for the form's action
-attribute
[24] I have to find a more serious name for that
[25] caveat: that's all software of either the early or the late 1990's
[26] Java code generation is certainly not concerned
[27] you know IBM invested membership fees at OSF, so this is the ROI
[28] SNI is the merger of Siemens' and Nixdorf's non-IT departments plus some experts
[29] they call it `SINIX' - isn't that awfully impertinent!?! - but forget about that name!
[30] which is unrenouncable in production environments, where you distribute hundreds and thousands of reports at night
[31] for reasons, that only G'd knows ...
[32] and a homonymous DOS, Win95, ... utility :-)
[33] not like certain people from former eastern german `academical' institutions