
CORE NETWORK

The core network is the central portion of a telecommunication network, the part needed to deliver all services to customers connected through the access network. Its main functions include routing or switching calls across the PSTN and providing paths for the exchange of information between the different sub-networks.

NETWORK TOPOLOGY

Topology is the design or layout pattern of the interconnections between the elements of a network, including a core network. It can be described in both physical and logical terms: the physical topology refers to cable installation locations and the devices used, while the logical topology describes how data is actually transferred. The shape or structure of a network can be understood with the help of its topology.

Design proposals for the core network based on lowest cost:

To provide a reliable connection for transmission, the chosen topology should support continuous data transmission. Since the area to be covered is not confined to one small region, a mixed topology using both LAN and WLAN is the better choice. The production facilities are distributed across two floors within one building, so a LAN can be used there: a wired LAN offers a high data-transfer rate, into the Gbps range, and is also quite cost-efficient for a single building. A WLAN, by contrast, delivers transfer rates only in the Mbps range, because wireless networks are slow compared with wired ones and have no dedicated transmission medium.

For remote data transmission, such as between two buildings, a mixed topology may be used: WLAN links connect the buildings, and LAN is used within each building. With such a topology the SMG servers will be available for both live and recorded feeds. A mixed topology is cost-efficient, its maintenance is easy to handle, and it keeps record-keeping simple. There may be a small dip in transfer rate wherever the channel changes from LAN to WLAN and vice versa, so if possible it is better to run the whole network on LAN alone.
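
As a rough illustration of the Gbps-versus-Mbps gap just described, the sketch below estimates how long one event recording would take to move over a wired LAN compared with a WLAN. The 1 Gbps and 54 Mbps link rates are assumed example figures (real throughput is lower than the line rate); the 2GB file size matches the per-video quota discussed later.

```python
# Ideal transfer-time comparison for a wired LAN link vs a WLAN link.
# The link rates are illustrative assumptions, not measured values.

def transfer_seconds(size_bytes: int, rate_bps: float) -> float:
    """Payload size in bits divided by the link rate."""
    return size_bytes * 8 / rate_bps

VIDEO_BYTES = 2 * 1024**3      # one 2GB event recording
LAN_BPS = 1_000_000_000        # assumed 1 Gbps wired LAN
WLAN_BPS = 54_000_000          # assumed 54 Mbps WLAN

print(f"LAN:  {transfer_seconds(VIDEO_BYTES, LAN_BPS):8.1f} s")
print(f"WLAN: {transfer_seconds(VIDEO_BYTES, WLAN_BPS):8.1f} s")
```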

For end-user connections during events such as concerts and sports matches, the servers should be kept highly available and able to handle the incoming traffic. Coverage of concerts and sports needs to be online at all times, because unavailability of the service can mean lost revenue and lost customer confidence. End-users simply want the content they are looking for to be precise and to load without a long wait. So the main way to attract traffic is to make end-users comfortable with the content and to keep that content highly available to them 24x7.
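
To make the 24x7 availability requirement concrete, the short sketch below converts an availability target into the downtime it would permit per year. The targets shown are common illustrative values, not SMG figures.

```python
# Downtime allowed per year at a given availability target.
# The targets listed are illustrative, not SMG requirements.

HOURS_PER_YEAR = 24 * 365

for target in (0.99, 0.999, 0.9999):
    downtime = HOURS_PER_YEAR * (1 - target)
    print(f"{target:.2%} available -> {downtime:6.2f} hours down per year")
```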

Design proposal on the basis of best performance:

High-performance networks are essential and are a recent addition to the tools used by plans and purchasers attempting to reduce cost; they are often described in terms of efficiency and quality improvement. These networks are prevalent but have been introduced only in selected markets, where they are offered without any form of enrolment. Because they are distributed without enrolment, they depend largely on employers that include high-performance networks in their sponsored network infrastructure.

High-performance networks are generally not distinct products; they are offered as an option within other platform-dependent products, most commonly preferred provider organisations (PPOs).

High-performance networks differ across plans in their exact specifications. The cost-sharing differentials most commonly correspond to a model with tiered provider levels: the first tier consists of the high-performing providers, the second tier of the remainder of the in-network providers, and the third tier of out-of-network providers. Employers often do not differentiate cost sharing between the first and second tiers, offering these networks only as a source of information for their employees about which providers perform better.


The physicians most often targeted by these plans are specialists, since the assessments of which providers to include in these networks are conducted by speciality; multispecialty group practices may press plans to include all of their specialities. The criteria the plans use to select which specialties to include focus on the following:

- Information sharing on a very large scale.
- Variation in cost and quality significant enough to matter.
- Sufficient claims volume to measure efficiency and quality at the practice level.
- Benchmark performance established on the basis of quality measures and/or guidelines.

Data storage capacity for sports videos:

After each concert or sports event is recorded, SMG needs to store the recording for the website. This requires a large amount of storage, and the data must also be accessible online, since end-users need to view the recordings on demand. To serve this data under heavy traffic, it should be mirrored across more than one link. Mirroring helps handle the web traffic for these recordings: a single website is served from several servers, which increases access speed and improves the site's responsiveness. End-users notice no difference when accessing a mirrored website and remain satisfied with the service, and keeping end-users satisfied is the one thing that increases traffic.

For storage, a RAID architecture may be used, in which the data is stored in more than one place; this also helps with web accessibility of the data. With RAID, the data is mirrored in more than one place, that is, the same data is stored in several locations with the same storage structure. The benefit is that this also removes the risk of disaster: mirroring not only copies the data to many places but builds a system capable of recovering the whole data set if it is damaged in any mishap.

With RAID, the data also becomes safe and easy to recover after a disaster: with the help of any one of the mirrors, the whole data set is easily recoverable and reusable. For resilience there is no need to move the whole setup to another place; any mirror can be used to restore the complete data set there. Each mirror node keeps a keyset used to recover the data set: given a data set and its keyset, the RAID mirror image can recover the whole data set associated with that key. So one mechanism does two jobs. First, it handles website traffic by diverting requests to mirrored sites, letting users in different regions access the data from different locations. Second, it prepares us for disaster, so that whatever the damage, the whole data set can be recovered.
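
A minimal sketch of the mirroring idea, with plain files standing in for the RAID members: every write is duplicated to two independent locations, and a read is served from whichever copy survives. In practice RAID 1 is handled by a controller or the operating system rather than application code; this only illustrates the principle, and the directory names are assumptions.

```python
# Toy RAID-1-style mirroring: duplicate every write to two independent
# locations; serve reads from the first surviving copy. Real mirroring
# is done by a RAID controller or the OS, not by application code.
from pathlib import Path

MIRRORS = [Path("mirror_a"), Path("mirror_b")]   # assumed mirror roots

def mirrored_write(name: str, data: bytes) -> None:
    for root in MIRRORS:
        root.mkdir(exist_ok=True)
        (root / name).write_bytes(data)          # same bytes on each mirror

def mirrored_read(name: str) -> bytes:
    for root in MIRRORS:                         # fall back mirror by mirror
        try:
            return (root / name).read_bytes()
        except OSError:
            continue                             # this copy is damaged
    raise OSError(f"all mirrors lost for {name!r}")

mirrored_write("match_final.mp4", b"...video bytes...")
print(len(mirrored_read("match_final.mp4")), "bytes recovered")
```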

Minimisation of server and computer architecture investments:

Traffic on a network comes from the content on that network. Here the production output is videos of sports and concerts, so the better that content is, the more the website traffic will grow; and as the traffic grows, so does the need for servers. Setting up servers requires large capital, but with efficient use of the available resources this investment can be reduced to a great extent.

Managing the web content is one task; handling the incoming traffic is another. The data can be managed as discussed above, using a content-management architecture such as RAID, but handling the traffic requires maintaining high-availability servers. Along with high availability, website mirroring can also be used, which helps serve heavy traffic and keeps end-users satisfied with the content.

For high availability of the service, it is better to maintain, alongside the data servers, dedicated load-balancing servers to handle the expected bulk of users arriving at the website. We need to maintain more than one datacentre in parallel, and the load-balancing server divides the website traffic once a particular server reaches the maximum number of requests it is allotted to handle. As requests keep increasing, the extra traffic is diverted to the next datacentre server, which responds just like the main server, since both have the same configuration and the same content.
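
The overflow rule described above (fill one server up to its allotted request limit, then divert the extra traffic to the next identically configured datacentre) can be sketched as follows. The request limit and the server names are assumptions made for the example.

```python
# Overflow dispatch: each server takes requests up to its allotted
# maximum; extra traffic spills over to the next datacentre, which
# serves the same content. MAX_ACTIVE and the names are assumed.

MAX_ACTIVE = 1000                       # requests one server may hold

class Dispatcher:
    def __init__(self, servers: list[str]) -> None:
        self.active = {s: 0 for s in servers}

    def route(self, request_id: int) -> str:
        for server, load in self.active.items():
            if load < MAX_ACTIVE:       # first server with spare capacity
                self.active[server] += 1
                return server
        raise RuntimeError("all datacentres at capacity")

    def finish(self, server: str) -> None:
        self.active[server] -= 1        # a request completed; free a slot

lb = Dispatcher(["dc1.example.net", "dc2.example.net"])
print(lb.route(1))                      # dc1 handles traffic until it fills
```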


With this arrangement there is no need for a large support staff; four or five individuals are sufficient for the administration and regulation of the servers. Such an architecture reduces the production and support staff, which translates into a large saving for the company.

Partition of data:

The data partition must cover the email, database, and website requirements. Since we maintain our own server, these requirements are easy to satisfy: for the website we need a domain, and on that domain we can easily allocate an email address for any user or for our staff. For the database, we already have a large amount of storage for the concert and sports videos, so some space from there can be used as our database too.

By doing so, we can keep logs and administer the data usage of all the end-users and management teams. We only have to design an architecture that shows us the content usage of each user and staff member. This can also be an advanced feature of the website that increases its traffic.
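
A minimal sketch of that per-user accounting, assuming one record per access; the field names and addresses are invented for illustration, not an existing SMG schema.

```python
# Per-user content accounting: record every access and aggregate the
# bytes served per user and per item. Fields are illustrative only.
from collections import defaultdict

usage: dict = defaultdict(lambda: defaultdict(int))  # user -> content -> bytes

def record_access(user: str, content: str, size_bytes: int) -> None:
    usage[user][content] += size_bytes               # one entry per access

record_access("staff01@smg.example", "concert_2018.mp4", 1_500_000_000)
record_access("fan42@smg.example", "match_final.mp4", 900_000_000)

for user, items in usage.items():
    total = sum(items.values())
    print(f"{user}: {total / 1024**3:.2f} GB across {len(items)} item(s)")
```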

Storage:

We are storing videos of the concerts and sports events held at the stadium. A video of a whole event will be quite large, and there may well be several videos for a single event, so enough space must be available for the videos to be stored without any problem. At present each video is allocated a disk quota of 2GB, which is quite generous for a single video. Where there is more than one video, the total number of videos stored at any time should be kept below 1000. Staff get a special privilege: a disk quota of 5GB instead of 2GB. Such a privilege motivates end-users to become staff members of the organisation, which definitely helps it in one way or another.
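
The three limits stated above (a 2GB per-video quota for end-users, 5GB for staff, and fewer than 1000 stored videos at any time) can be enforced with an admission check like the sketch below. Only the limits come from the text; the function itself is an illustrative assumption.

```python
# Admission check for the stated storage rules: 2GB per video for
# end-users, 5GB for staff, and fewer than 1000 videos stored at once.

GB = 1024**3
USER_QUOTA = 2 * GB       # per-video disk quota for end-users
STAFF_QUOTA = 5 * GB      # staff privilege: larger per-video quota
MAX_VIDEOS = 1000         # total stored videos must stay below this

def may_store(video_bytes: int, is_staff: bool, stored_videos: int) -> bool:
    quota = STAFF_QUOTA if is_staff else USER_QUOTA
    return video_bytes <= quota and stored_videos + 1 < MAX_VIDEOS

print(may_store(3 * GB, is_staff=False, stored_videos=10))  # False: over 2GB
print(may_store(3 * GB, is_staff=True, stored_videos=10))   # True: staff quota
```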

Remote Access and Security:

Data and network security play a vital role in the web world: as reputation increases, so does the risk that must be managed to maintain integrity. First, the network should be secure enough that there is no unauthorized access to it; second, the data content itself must be secured. In particular, the networks of the production staff and the admin/management staff must both be fully secure, on site and remotely, because any unauthorized data access can result in a huge loss to the firm. Once one unauthorized entry has occurred, it can create many more loopholes for getting into the network later. So it is necessary to build a network architecture safe and hard enough to penetrate that no one can gain access, and anyone who tries is easily traced. Tracing such activity is also very important, because it tells us what flaws exist in our network and server architecture, which helps us improve the existing setup.

So, first, our network should be secure enough that no one can get into it. Next come our servers, which will be online and accessible around the globe: anyone from any part of the world may attack them and try to compromise them, so we must be ready for such activity. This can be done by keeping continuous logs of the website and of every other service our servers provide. Maintaining log files for each activity on the website also helps us improve the content and satisfy our clients, since we can learn what they want our servers to serve. We can discover each client's interests by analysing the type of content they search for and usually access on the site, and in time we may develop a search feature that works on the basis of each user's logs.
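
As a sketch of that log analysis, assuming a simple access log with one "user path" pair per line (the format is an assumption, not a real log specification), the snippet below tallies which content each user requests most, which is the signal a log-based search feature would start from.

```python
# Tally which content each user accesses most from a simple access log.
# The "user path" line format is an assumed example, not a real spec.
from collections import Counter, defaultdict

LOG_LINES = [
    "fan42 /videos/match_final.mp4",
    "fan42 /videos/match_final.mp4",
    "fan42 /videos/concert_2018.mp4",
    "staff01 /videos/concert_2018.mp4",
]

per_user: dict = defaultdict(Counter)
for line in LOG_LINES:
    user, path = line.split(maxsplit=1)
    per_user[user][path] += 1            # one hit per logged request

for user, hits in per_user.items():
    top, count = hits.most_common(1)[0]
    print(f"{user}: most accessed {top} ({count} hits)")
```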

Backup:

Our data sets grow day by day, and because we are online 24x7 we cannot let our servers go down even for a moment. It is therefore not a good idea to keep all the data in a single place: the data should be kept in more than one place, and the main data should be backed up from time to time, which preserves the integrity of the content. Backups of the main content can be taken on tape or on disc on a daily, weekly, monthly, or yearly basis. If the backup is taken at a short interval, such as a week, then any mishap is recoverable to the maximum extent, because the data at risk is limited to what accumulated between two backups.

Short-interval backups work out better if any mishap does occur: when the interval between backups is small, it is easy to recover almost all of the data, and the more data recovered, the better for the firm. Backups of the servers are done by keeping their kickstart files, which hold the whole setup configuration; with the help of a kickstart configuration file we can easily set up the whole server architecture again in a short while.
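
The case for short backup intervals can be made concrete: after a failure, the data at risk is at most what accumulated since the last backup. The sketch below compares that worst-case loss for the backup frequencies mentioned above, using an assumed ingest rate.

```python
# Worst-case data loss equals the data written since the last backup,
# so a shorter interval bounds the loss more tightly.
# The 50 GB/day ingest rate is an assumed figure for illustration.

INGEST_GB_PER_DAY = 50

for label, interval_days in (("daily", 1), ("weekly", 7), ("monthly", 30)):
    at_risk = INGEST_GB_PER_DAY * interval_days
    print(f"{label:8s} backups -> up to {at_risk:5d} GB at risk")
```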

Summary:

Organisations like IFPC, with a distributed working architecture, must first design their communication architecture very well: an organisation providing any sort of service over the internet should have a high-performing network, and above all its website should be highly available 24x7. IFPC provides a video service over the internet, with recordings of concerts and sports, and analysis shows a huge volume of internet searches for such videos online. Even long after an event, searches for its content do not decrease, because no one knows when any piece of available data will be needed. Keeping this in mind, we have to be prepared with our datacentre and our website, so that any search on the website is answered as well as it possibly can be. Such a service will definitely increase the popularity of the website and populate it with a huge number of users.

For that, our online availability at all times is very important: the moment our servers fail to fulfil a client's requirement, that client is not going to return to the website. So, first, the internal network should run on LAN, which makes the network connectivity much more reliable and supports continuous data transmission at a high transfer rate. For high availability it is better to maintain, alongside the data servers, load-balancing servers to handle the expected bulk of users coming to the website. We need to maintain more than one datacentre in parallel, and the load-balancing server divides the website traffic once a particular server reaches the maximum number of requests it is allotted to handle; as requests keep increasing, the extra traffic is diverted to the next datacentre server, which responds just like the main server, since both have the same configuration and the same content.

The next thing is backups: having built and maintained such an architecture and such servers, we have to keep a backup of the whole architecture so that we can easily set it up again if anything goes wrong, such as a disaster. So, backups of the servers are done by keeping their kickstart files, which hold the whole setup configuration; with the help of that kickstart configuration file we can easily set up the whole server architecture again in a short while. In the case of the datacentre, we need to back up the whole system from time to time, keeping the backup interval small, because a small interval works more satisfactorily during data recovery and allows the maximum amount of data to be recovered.