From: Larry Mixson
To: Elizabeth
Date: Tue, 17 Mar 1998 09:54:06 -0500
Subject: Job Descriptions
Below is a more detailed job description of my last few jobs. Can you
view Microsoft Word documents? If so, I'll send you a copy of my resume
in MS Word format; otherwise I'll have to convert it to straight text
and it will not be as "pretty".
Bell Atlantic Video Services - 1996-current
Technical Description
Overview:
Bell Atlantic Video Services' purpose is the creation of video networks
to compete with cable TV. My responsibilities are the selection,
procurement, installation and integration of computer servers to support
Bell Atlantic's video services business. These servers fall into two
categories: video servers and business support servers. Selection of the
servers involved defining requirements, soliciting vendors for product
information, evaluating the products for cost and performance, and then
selecting the vendor and product. In addition, I was involved with the
requirements and specification of software for a video set-top box (i.e.
cable box).
Video Servers:
Video servers supply digital video to customers from video content
stored on disks. The video servers fall into two classes: Video On
Demand (VOD) and Near Video On Demand (NVOD). VOD servers supply many
simultaneous video streams to customers, whenever each customer wants to
watch the video. VOD also supports VCR features such as rewind, fast
forward and pause. NVOD servers also supply multiple simultaneous video
streams, but far fewer (typically 40-60) and at fixed play times (e.g. a
movie plays at half-hour increments during the day). NVOD does not
support VCR features.
For NVOD servers I implemented Silicon Graphics Challenger systems. The
system consisted of dual redundant servers connected to 150 GB of shared
RAID for 140 hours of video storage. The system delivered 40 to 60
simultaneous 3 Mbps MPEG2-encoded video streams. The output was
delivered via two ATM ports (per server) into a Fore Technologies ATM
switch and on into Bell Atlantic's video network. Dual redundant servers
allowed one server to fail and the other to pick up its video streams
with minimal interruption perceived by the customer. The NVOD software
was developed by SGI. One such system was fully implemented and
integrated into Bell Atlantic's first video system. Implementation also
included integration of the server with the Business Support Systems
(BSS) that managed the scheduling of the movies to play.
For VOD I performed a detailed evaluation and analysis of
next-generation video servers. (Note: the first-generation VOD server
was installed prior to my employment and used for a VOD trial with
1,000 customers.) Video servers by Sun, SGI, HP and NCube were
investigated, with the final analysis focused on SGI and NCube. The
requirements called for a server that could deliver up to 10,000
simultaneous 3 Mbps MPEG2-encoded video streams.
The requirements called for the initial server to deliver 1,000 video
streams and then be capable of expanding to the full 10,000 streams. In
addition, the system needed to support enough disk capacity for 750
hours of video content. Only the new generation of servers by SGI and
NCube, based on hypercube and NUMA architectures, were deemed up to the
task. One of the major problems with delivering large numbers of
simultaneous video streams is the I/O bandwidth through the system: the
system must be able to read the video data from disk, pump it through
the system and out the ATM ports.
Technologies including disks, SCSI, Fibre Channel, bus bandwidth, ATM
bandwidth and processing speed were evaluated. A major problem is the
number of simultaneous video streams that can be read from a disk or
disk array. With SCSI-attached disks the bottleneck became the SCSI
channel at 20 MB/s (or 40 MB/s for Ultra Wide); Fibre Channel increased
this to 100 MB/s. To help overcome this, the video content was striped
across as many as 10 disks. Even so, the systems had to have as much as
twice the disk storage otherwise necessary (to hold the 750 hours of
content) in order to deliver the required number of video streams.
Extensive tests were conducted at the selected vendor sites to measure
system and component performance. Oracle Video Server software was
selected to provide video streaming.
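For a rough sense of the arithmetic behind these choices, here is a
back-of-envelope sketch (Python, purely illustrative; the 3 Mbps stream
rate and channel speeds are the figures above, while the 70%
usable-bandwidth factor is an assumption of mine):

    # Back-of-envelope sketch of the video server I/O arithmetic.
    # Assumes 3 Mbps MPEG2 streams and that roughly 70% of a disk
    # channel's raw rate is usable for sequential video reads.
    STREAM_MBPS = 3.0

    def streams_per_channel(channel_mbytes_per_sec, usable_fraction=0.7):
        usable_mbps = channel_mbytes_per_sec * 8 * usable_fraction
        return int(usable_mbps // STREAM_MBPS)

    # Aggregate output requirement for the full 10,000-stream system.
    print("aggregate output: %.0f Gbps" % (10_000 * STREAM_MBPS / 1000))

    for name, mbytes_per_sec in [("Ultra SCSI", 20),
                                 ("Ultra Wide SCSI", 40),
                                 ("Fibre Channel", 100)]:
        print("%-16s ~%d streams per channel"
              % (name, streams_per_channel(mbytes_per_sec)))

That per-channel ceiling is what drove striping the content across as
many as ten disks.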
Data Carousel Server:
I am responsible for defining the requirements, selecting the vendor and
overseeing development of a data carousel server for the video network.
A data carousel is a small server that continuously streams data out
onto the video network. Examples of data carousel use are the download
of software into the customer's set-top box and delivery of the TV
electronic program guide data to the set-top box. The data carousel
takes data from various sources (TV guide data from a satellite feed,
download programs from programmers), packages it into an MPEG2 video
stream and pumps that stream into the video network to the set-top box.
I am currently working with a vendor to develop a data carousel server
to our specifications.
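Conceptually the carousel just loops over its data forever; a toy
Python sketch of the idea (the item names and the packetizing step are
placeholders, not the vendor's design):

    import itertools

    # Minimal sketch of the data carousel idea: cycle endlessly over the
    # data items, wrapping each one for the MPEG2 transport stream.
    def carousel(items):
        for item in itertools.cycle(items):
            yield ("MPEG2 section", item)   # placeholder for real packetizing

    # Demo: take the first six packets from a two-item carousel.
    demo = carousel(["program guide data", "set-top software image"])
    for packet in itertools.islice(demo, 6):
        print(packet)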
Business Support Servers:
I am responsible for analyzing the requirements for the Business Support
Systems (BSS) and selecting servers to support them. The BSS consisted
of several commercial off-the-shelf packages, custom-developed packages
and a database system. The architecture is a client-server system with
UNIX servers and PC-based clients. The overall "system" was distributed
between diverse locations in three states over a LAN/WAN. The system
supported all of the business processes required for a cable-TV-like
video business, e.g. the customer service center, billing, movie
scheduling, customer activation, etc. After analysis of the
requirements, a combination of HP UNIX servers and Windows NT servers
was selected. The servers were configured with as much redundancy as
possible: redundant power supplies, multiple CPUs, dual Ethernet cards,
RAID storage, etc.
DynCorp - 1993-1996
Management Responsibilities
While at DynCorp Image Group I was the Program Manager for the
implementation of Image systems for the U.S. Navy. As Program Manager I
had full beginning to end responsibility for the implementation and
delivery of the system to the Navy. This included financial management,
staff management, subcontractor management, vendor management, and
client management.
Financial management included responsibility for two contracts, one for
five million dollars and one for six million dollars. In this role I was
responsible for the review of all purchase orders, payment of vendors,
and payment of subcontractors. I performed monthly financial
reconciliation of expenditures against budgeted amounts and prepared and
presented quarterly budget reviews.
Staff Management included the supervision of a team of six to ten
engineers responsible for the implementation of the system (see
technical description below).
Subcontractor Management included the management of two subcontractor
companies, one with four persons and one with 15 persons. These
subcontractors were responsible for implementation of major subsystems.
Management responsibilities included assigning tasks to the
subcontractors, establishing schedules, negotiating statements of work,
reviewing deliverables for conformance to agreements and approving
payments.
Vendor Management included the selection of vendors to supply equipment
and negotiating prices, delivery schedules, and maintenance agreements.
In addition, when technical problems arose I worked with vendors, often
on site, to resolve them.
Client Management included interfacing with the Navy COTR to establish
schedules, changes to the schedule, installation details, acceptance
testing, engineering change orders, and of course the normal monthly
reviews and reports.
Proposal Support: In addition to my primary responsibility for the Navy
image contract I provided considerable proposal support. Being in the
Image Group, my primary work in this area was supporting the proposal
group on image proposals. In this role I would visit the client,
evaluate their requirements and determine system configuration and size.
I then would typically write the technical sections of the proposals.
Technical Job Description
Overview:
While at DynCorp Image Group I managed the implementation of a large
image system. The system is now fully operational and in full
production. The system scans 35,000 images a day, performs data entry
from each of the images and stores the images on optical disks. In
addition, the system is able to output 10,000 microfiche a day (800,000
images) to microfilm, read in 100,000 images a day from magnetic tape
and write them to optical disk, and support 50 online retrieval
workstations. In my role as program manager I was responsible for all
aspects of the project, including selection of the hardware and COTS
software, vendor selection and vendor interface, software development,
testing, documentation, installation, and financial profitability of the
project, and I was the primary interface to the customer. The system is
based on a client-server architecture with a Sun 2000 server and PC
workstations. Also in the system configuration are five optical disk
jukeboxes, 4 8mm tape drives, 150GB of RAID, 7 Computer Output
Microfiche units, 3 high-speed scanners, 2 low-speed scanners, 30
operator workstations for image applications, and a TCP/IP network.
Host System:
The Sun 2000 has 1GB of memory, 6 processors, 150GB of RAID magnetic
storage, 12 SCSI interface ports and FDDI network interface. Also on the
Sun are 4 8mm tape drives for system backup, database transaction
journaling, image transaction journaling, and image input. The RAID is
dual ported so that it can be connected to a backup Sun 1000 that can be
used for limited production in case the Sun 2000 is down.
Optical Storage:
The primary image storage was optical disks stored in 5 optical
jukeboxes each containing 1000 disks for a total of 6.5 Terabytes of
storage. Initially the system had two jukeboxes each with four optical
drives. In this configuration each jukebox required a single SCSI cable
as a SCSI interface can support up to 7 devices (four drives, two
robotics, one unused). As an add on expansion to the project the
customer requested increased storage capacity and retrieval rates. To
accommodate this the existing two jukeboxes were upgraded to contain 6
optical drives and three additional jukeboxes of this configuration were
added. With 6 optical drives the number of devices exceed the capability
of a single SCSI interface so the physical jukebox unit was divided into
two logical units each with three drives and one robotics. This then
required 10 SCSI ports on the Sun host. Also because of the distance of
the jukebox placement from the host differential SCSI had to be used.
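The SCSI ID arithmetic behind that split works out as follows (a small
Python sketch, assuming the usual 8 IDs per bus with one reserved for
the host adapter):

    # SCSI ID budgeting for the jukebox upgrade (8 IDs per bus, one
    # reserved for the host adapter, leaving 7 usable device IDs).
    MAX_DEVICES_PER_BUS = 7

    def buses_needed(drives, robotics):
        devices = drives + robotics
        return -(-devices // MAX_DEVICES_PER_BUS)   # ceiling division

    print(buses_needed(4, 2))   # original jukebox: 6 devices -> 1 bus
    print(buses_needed(6, 2))   # upgraded jukebox: 8 devices -> 2 buses
    print(5 * buses_needed(6, 2), "SCSI ports on the Sun host")   # -> 10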
Scanning Subsystem:
The scanning subsystem consisted of three high speed (relatively
speaking) scanners (Ricoho 520) capable of 25 double sided pages per
minute for a rate of 50 images per minute and two low speed flatbed
scanners (HP scanjet). The scanner had firmware for automatic image
deskew and blank page detection and deletion. The scanners interface to
the PC via a video interface (faster than a SCSI). The PC has a large,
fast disk for local storage of images and a Kofax image interface card
for the Scanner. While scanning to the local disks a second PC read the
images from the scanning PC and uploaded them to the Sun host.
Network:
The network is a TCP/IP network with a dual FDDI backbone. The Sun 2000,
Sun 1000 and two Hughes Hubs are connected on the FDDI ring. The Hughes
Hubs each have four 10BaseT subnet cards. The Hughes Hub are intelligent
in that they will only route network traffic to devices that are on a
specific subnet. This was necessary because of the large amount of image
data being transferred over the network. The scanners, workstations and
COM units were distributed across the eight 10 Mbit subnets such that an
individual subnet would not be saturated. The 100Mbit FDDI backbone was
determined to have sufficient bandwidth for the system.
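A quick sanity check of that sizing, using only the figures above (the
check itself is mine, not part of the original design documents):

    # Eight 10 Mbit subnets feeding a 100 Mbit FDDI backbone: even with
    # every subnet saturated, aggregate demand stays below the backbone rate.
    SUBNETS, SUBNET_MBIT, BACKBONE_MBIT = 8, 10, 100

    worst_case = SUBNETS * SUBNET_MBIT
    print("worst-case subnet load: %d Mbit vs %d Mbit backbone"
          % (worst_case, BACKBONE_MBIT))
    print("backbone headroom: %d Mbit" % (BACKBONE_MBIT - worst_case))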
COM Output:
At the current time the primary output for the customer is film and
microfiche. The system thus has seven Computer Output Microfiche (COM)
units. Four units can produce both 16mm microfilm and 105mm microfiche
and three units only produce 105mm microfiche. The units connect to the
Sun host via the network. The four units have their own Sun workstation
to drive them and the access the host system via NFS. The three
microfiche units worked at a direct TCP/IP socket interface to the Sun
2000 host.
Workstations:
There are approximately 30 workstations in the system. These are used
for image indexing, i.e. entering data into a database from a displayed
image, QA of indexed data and images, QA of images received on tape,
printing, and on-line retrieval and display of images. The workstations
all run Windows for Networks 3.1 and are connected to the Sun host via
the 10BaseTCP/IP network.
Software:
The host system ran under UNIX (Sun's Solaris), Informix database and a
commercial Image Management product. Host applications were developed in
C and ESQL. Where possible COTS packages were used, e.g. system backup
and restore. Workstation applications were developed in Visual basic
using Kofax VBX image controls. Interface from the workstation client to
the host database was through the ODBC layer. Again third party packages
were either used or integrated, such as third party VBX controls for
image display, host database interface, scanner control, image
manipulation, etc.
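As a purely modern illustration of the client-to-host-database pattern
(not the original Visual Basic code; the DSN, table and column names
below are invented for the example):

    import pyodbc   # Python ODBC layer, standing in for the VB/ODBC client

    # Hypothetical DSN, table and column names, purely for illustration.
    conn = pyodbc.connect("DSN=imagehost;UID=operator;PWD=secret")
    cursor = conn.cursor()
    cursor.execute(
        "SELECT image_id, optical_volume FROM image_index WHERE batch_id = ?",
        ("19960115-001",))
    for image_id, optical_volume in cursor.fetchall():
        print(image_id, optical_volume)
    conn.close()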
Expansion:
The final phase was a major expansion of the system. The expansion
involves replacing microfiche as the primary output. To accommodate the
expansion, three additional major systems were developed to support over
1200 online users. One system was for reviewing of image records stored
in the primary storage system as described above. This system retrieved
up to 300,000 images over three months from the primary system and
stored them on a local 600GB RAID. A second system was used to input,
process and then transmit to the primary system an additional 10,000
images a day. A third system in a different city was a smaller version
of the primary storage system. All four of these systems will be
networked together using multiple LANS and WAN. In addition there was
1000 on-line retrieval workstations added to the primary system.
To increase performance of the primary system, the configuration was
changed to have a Sun SPARC 20 host drive each of the optical jukeboxes
(also increased to six). Each of the SPARC 20 hosts was then on the FDDI
ring and networked to the Sun 2000 host, where the database resided.
DynCorp: Justice Information System
Management Responsibilities
While at DynCorp I was the Project Manager for the implementation of a
Justice Information System for Montgomery County in Dayton, Ohio. As
Project Manager I had full beginning-to-end responsibility for the
implementation and delivery of the system to the County. This included
staff management, subcontractor management, vendor management, and
client management. The project was to be a justice information system
with modules for the county clerk, prosecutor, sheriff, evidence
tracking, jury selection, courtroom scheduling, etc. It was essentially
an extensive workflow design and implementation effort to automate the
workflow of criminal and civil court cases from beginning to end. The
project used a combination of object-oriented design and rapid
prototyping during the requirements phase. The project was about six
months into the requirements phase when it was canceled.
Staff Management included hiring a team of fifteen engineers responsible
for the implementation of the system.
Vendor Management included the selection of vendors to supply equipment,
negotiating price, delivery schedules, and maintenance agreements.
Client Management included interfacing with the County to establish
requirements and schedules.
ICL Office Systems - 1988-1993
Management Responsibilities:
As project manager I oversaw the end-to-end development and release of
new products and new releases of existing products. This included
interfacing with marketing to determine requirements, determining
schedules, resolving technical issues, writing product plans, assigning
development resources, managing testing, and ensuring delivery of final
software to manufacturing.
Prior to my role as project manager I was a development manager,
managing a team of developers building both UNIX and PC Windows
applications. In this capacity I worked with the engineers to estimate
project size, determine technical approaches and feasibility, determine
schedules, develop the applications, and handle application testing and
documentation. I also worked closely with marketing to translate the
marketing requirements into a workable product.
Technical Description
ICL Office Systems made and sold an office automation suite called
OfficePower consisting of several packages, including word processing,
spreadsheet, calendar, reminder system, phone message system, electronic
mail, a simple record database and others. These were all integrated
together with a common, consistent user interface. OfficePower was a
classical host-based system with the applications running on UNIX hosts
and the users on dumb terminals. OfficePower could support up to several
hundred users with networked hosts. OfficePower was ported to run on
several different UNIX platforms including ICL, Sun, SCO and others.
OfficePower was also truly internationalized, being translated into over
10 languages. Internationalization was a unique challenge to programmers
in that all user-presented screens and messages had to be placed into
tables so they could be translated before being displayed.
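The table-driven approach amounts to looking every user-visible string
up by key in a per-language table instead of hard-coding it; a toy
Python sketch of the idea (the languages, keys and translations here are
invented):

    # Toy sketch of table-driven internationalization: every user-visible
    # message is fetched by key from a per-language table, never hard-coded.
    MESSAGES = {
        "en": {"mail.new": "You have new mail", "doc.saved": "Document saved"},
        "fr": {"mail.new": "Vous avez du courrier", "doc.saved": "Document enregistré"},
    }

    def message(key, lang="en"):
        # Fall back to English if a translation is missing.
        return MESSAGES.get(lang, {}).get(key, MESSAGES["en"][key])

    print(message("doc.saved", lang="fr"))   # -> Document enregistré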
With the growth of PCs, ICL found that its clients wanted to interface
with OfficePower via PCs. I was hired at ICL to create a new user
interface on PCs under Microsoft Windows using client-server technology.
What we created moved much of the OfficePower application interface from
the host UNIX system onto the PC under Windows. The PCs were networked
to the host using TCP/IP and NFS.
In addition, we developed a system such that the user could use
Microsoft Word or WordPerfect on the PC but store the documents they
created in the OfficePower document management system on the host.
OK, it was a lot. I think I wanted to impress her.