
2.1.6.2. vRealize Log Insight

vRealize Log Insight provides centralized, real-time analytics of log messages (syslog) from the hypervisors, hardware, and other components of the NFVI. These analytics can be consumed by OSS/BSS or integrated with MANO.

vRealize Log Insight can also serve as a data source for vRealize Operations Manager, which allows analytics to be performed on the log data of the various components as well.
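
As an illustration of how NFVI components feed Log Insight, the following minimal Python sketch forwards a syslog message over UDP to a Log Insight collector. The collector address, host name, and application tag are assumptions for this example, not values mandated by this reference architecture; Log Insight is assumed to be listening for syslog on the standard port 514.

import socket
from datetime import datetime, timezone

LOG_INSIGHT_HOST = "loginsight.example.local"  # assumed collector address
SYSLOG_UDP_PORT = 514                          # standard syslog port

def send_syslog(message, hostname="esxi-host-01", app="nfvi-demo"):
    """Send a single RFC 5424-style syslog line over UDP."""
    pri = "<134>"  # facility local0 (16), severity informational (6): 16 * 8 + 6 = 134
    timestamp = datetime.now(timezone.utc).isoformat()
    line = f"{pri}1 {timestamp} {hostname} {app} - - - {message}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line.encode("utf-8"), (LOG_INSIGHT_HOST, SYSLOG_UDP_PORT))

if __name__ == "__main__":
    send_syslog("sample NFVI component event forwarded to Log Insight")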

2.1.6.3. Site Recovery Manager

Site Recovery Manager works in conjunction with various storage replication solutions, including vSphere Replication, to automate the process of migrating, recovering, testing, re-protecting, and failing back virtual machine workloads for disaster recovery.

2.1.6.4. vSphere Replication

vSphere Replication is a virtual machine data protection and disaster recovery solution. It is fully integrated with vCenter Server and the VMware vSphere® Web Client, providing host-based, asynchronous replication of virtual machines, including their storage.

2.1.6.5. vSphere Data Protection

vSphere Data Protection is a backup and recovery solution from VMware. It is fully integrated with vCenter Server and the vSphere Web Client, providing disk-based backup of virtual machines and applications. Third-party backup solutions that are certified for vSphere may also be used.

2.1.7 VNF Components Overview

While VNF components are not covered in this document, the vCloud NFV platform provides the supporting virtual infrastructure resources, such as network, compute, and storage, needed for the successful deployment and operation of VNFs.

2.2 Conceptual Architecture

The vCloud NFV platform offers a scalable architecture for multi-site deployments across different regions where VNF instances can be hosted. The architecture is designed with the following principles:

• VNF instances are placed in specific sites within a region to cover a geographical area.
• VNF instance deployments conform to diversity rules and high availability demands.
• Multiple markets are supported to comply with legal and regulatory demands.

Figure 4 illustrates the conceptual design of the vCloud NFV platform.

The vCloud NFV platform consists of three cluster types, each performing specific functions. The management cluster hosts the management components (MANO), while the NFVI is comprised of the edge cluster, containing the network components, and the resource cluster, containing the VNFs. This cluster design ensures that management boundaries are clearly defined, capacity is planned for, and resources are allocated according to the specific needs of the workloads in each cluster.
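
To make the three-cluster layout concrete, the sketch below lists the clusters visible to a vCenter Server and the hosts in each, using the open-source pyVmomi SDK. The vCenter address and credentials are illustrative assumptions, and the script simply reports whatever clusters exist (for example a management, an edge, and one or more resource clusters).

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Assumed values for illustration only; replace with the vCenter for the target site.
VCENTER = "vcenter01.example.local"
USER = "administrator@vsphere.local"
PASSWORD = "changeme"

def list_clusters():
    # Lab-style connection that skips certificate verification; production
    # deployments should validate certificates instead.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            hosts = ", ".join(h.name for h in cluster.host)
            print(f"{cluster.name}: {hosts}")
        view.DestroyView()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_clusters()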

A VLAN-based management network is responsible for the networking of most of the management components local to the site. Management components such as the vCloud Director cells and their database are placed on a VXLAN-based management network. This VXLAN-based management network is implemented using an NSX universal logical switch, which allows the creation of continuous L2 networks across sites, while a universal distributed logical router enables routing across the universal logical switch.

This design approach allows a single-site deployment to be used as a template for multiple sites, and allows vCloud Director cells to be placed at remote sites for disaster recovery with little or no interruption to the network infrastructure. While an L3 routed network may be used to connect the vCloud Director cells over a WAN link, the L2 network is easier to manage and has less networking overhead than an L3 network.

The specific configuration for a multi-site deployment of the vCloud NFV platform, and of the NSX universal logical router for stretching the management network across sites, will be covered in the next release of this reference architecture.

In addition to serving the management components, the NSX-based network infrastructure is responsible for the East-West and North-South traffic of the VNF workloads.

2.2.1 Management, Edge and Resource Clusters

Clusters are the top-level logical building blocks used to segregate resources allocated to management functions from resources dedicated to virtualized network functions. This section discusses the vCloud NFV platform cluster design considerations.

• Management Cluster – This cluster contains the components of the MANO working domain and comprises the servers that make up the VIM components as well as the NFVI Operations Management components. It also contains an additional subset of the VIM components to manage the resources of the management cluster itself and to provide networking services to the management components in the cluster.

• Edge Cluster and Resource Cluster – These two clusters combined form the NFVI working domain. They aggregate the underlying storage, network, and physical resources for the NFVI to consume. The NFVI is further divided into the edge cluster and the resource cluster.

The resource clusters host the VNF workloads. These can be deployed in a multitude of sites, based on region.

The edge clusters host the network components of the NFVI, such as the network control plane, distributed routers, and edge gateways. The edge cluster, along with the resource clusters, is used to facilitate software-defined networking functionality with services such as physical network bridging, routing, firewalling, load balancing, and virtual private networking (VPN). A single edge cluster is required at each site where a resource cluster exists, but multiple resource clusters (each pertaining to the same or multiple VNF instances) can leverage the same edge cluster within the site.

The vCloud NFV architecture uses VMware vSphere® Distributed Resource Scheduler™ (DRS) to continuously monitor the resource utilization of the hosts and workloads in a cluster and to dynamically and automatically redistribute resources, ensuring the cluster remains load balanced. DRS works in conjunction with vSphere vMotion to move live, running workloads across hosts in a cluster with no or minimal service disruption.

The automation level of a DRS-enabled cluster can be set to fully automated or partially automated. When the level is set to fully automated, DRS continuously monitors the cluster and migrates running workloads among the hosts in the cluster as necessary to balance the load. With the level set to partially automated, DRS generates migration recommendations but does not automatically migrate workloads in the cluster. An administrator (or an orchestrator) can manually approve the migration of running workloads after evaluating the DRS recommendations.

VMware recommends using DRS in fully automated mode for the management cluster. The edge and resource clusters should have their DRS automation level set to partially automated to avoid automatic run-time migrations, since certain classes of VNFs may have strict performance and networking requirements. This setting can be changed if required after assessing any potential impact to the VNF workloads.
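
A minimal sketch of applying these recommended automation levels with the open-source pyVmomi SDK is shown below. The vCenter address, credentials, and cluster names are illustrative assumptions that mirror the management/edge/resource split described above; this is not tooling prescribed by the reference architecture.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Assumed values for illustration only.
VCENTER = "vcenter01.example.local"
USER = "administrator@vsphere.local"
PASSWORD = "changeme"
DRS_LEVELS = {
    "mgmt-cluster": "fullyAutomated",            # management cluster: fully automated
    "edge-cluster": "partiallyAutomated",        # edge cluster: partially automated
    "resource-cluster-01": "partiallyAutomated", # resource cluster: partially automated
}

def apply_drs_levels():
    ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            behavior = DRS_LEVELS.get(cluster.name)
            if behavior is None:
                continue
            spec = vim.cluster.ConfigSpecEx(
                drsConfig=vim.cluster.DrsConfigInfo(enabled=True,
                                                    defaultVmBehavior=behavior))
            WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
            print(f"{cluster.name}: DRS automation level set to {behavior}")
        view.DestroyView()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    apply_drs_levels()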

vSphere vMotion offers a significant advantage for maintenance and disaster recovery scenarios for all workloads. When a host needs to be brought offline for maintenance, vMotion allows administrators to move the running workloads to other hosts in the cluster, ensuring service continuity. Once the host is back online, the workloads can be migrated back to it. An administrator can trigger the movement of live, running workloads for DRS and maintenance activities irrespective of the automation level of the cluster.
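
For example, placing a host into maintenance mode causes its running workloads to be evacuated with vMotion when DRS is fully automated. The pyVmomi function below requests maintenance mode for one host; it assumes a service instance connection has already been established as in the earlier cluster-listing sketch, and the host name is a hypothetical example.

from pyVim.task import WaitForTask
from pyVmomi import vim

def enter_maintenance(content, host_name="esxi-03.example.local"):
    """Find a host by name and request maintenance mode, letting DRS evacuate its VMs.

    'content' is the ServiceInstanceContent obtained from an existing pyVmomi connection.
    """
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == host_name)
    view.DestroyView()
    # timeout=0 means no timeout; in a fully automated DRS cluster, running VMs
    # are migrated off the host with vMotion before the task completes.
    task = host.EnterMaintenanceMode_Task(timeout=0)
    WaitForTask(task)
    print(f"{host_name} is now in maintenance mode")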

Shared storage requirements are fulfilled at the cluster level through Virtual SAN, which pools the local storage contributed by the member hosts of the cluster. Since data is distributed across multiple hosts in the cluster, it is protected against host failure with no single point of failure. The vCloud NFV platform also supports certified third-party shared storage solutions.

Replication to VTEPs on other remote L3 subnets is accomplished by sending a single unicast copy to one VTEP per remote L3 subnet. That VTEP then provides multicast replication to the other VTEPs within the same broadcast domain. This mode is recommended because it uses the native fan-out functionality of the physical network while removing the complexity of having to implement PIM for L3 multicast routing.

3.1.3.1. Transport Zones

A transport zone defines the scope of a VXLAN overlay network and can span one or more vSphere clusters. VXLAN overlay networks are used to provision VNF networks for East-West VNF traffic, as well as networks for North-South traffic through NSX logical routers and Edge service gateways. This reference architecture uses a single VXLAN transport zone to handle all VNF traffic and a second VXLAN transport zone to handle management workload traffic that needs to span geographical locations in case of disaster recovery.
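
As a sketch of how the transport zone configuration can be inspected programmatically, the Python example below queries the NSX for vSphere Manager REST API with basic authentication. The manager address and credentials are assumptions, and /api/2.0/vdn/scopes is assumed to be the NSX-V endpoint for transport zones (network scopes); verify the path against the NSX API guide for the deployed version.

import requests

NSX_MANAGER = "https://nsxmanager01.example.local"  # assumed NSX Manager address
AUTH = ("admin", "changeme")                         # assumed credentials

def list_transport_zones():
    # NSX-V returns XML by default; verify=False is acceptable only for lab certificates.
    resp = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/scopes", auth=AUTH, verify=False)
    resp.raise_for_status()
    print(resp.text)  # XML listing of the configured transport zones

if __name__ == "__main__":
    list_transport_zones()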

3.1.3.2. Routing

Different levels of routing must be considered within the environment. Edge Service Gateways (ESGs) handle North-South routing into and out of the NFVI. East-West routing is handled either by NSX Distributed Logical Routers (DLRs) or through the deployment of additional ESGs, allowing communication between VNF workloads in different subnets where required. The logical routing design is based on per-VNF requirements and should be decided as part of the onboarding process.

3.1.3.3. Transit Network and Internal Tenant Dynamic Routing

It is not possible to directly connect Edge service gateway and DLR devices, so a logical switch is used as an NSX transit VXLAN to create connectivity between them. Both the primary Edge service gateway and the DLR must be configured to peer using OSPF dynamic routing or static routes, thus providing end-to-end connectivity within the VNF instances.

The requirement to provision a transit network depends on the North-South connectivity requirements of the VNFs deployed in the vCloud NFV platform. This should be decided as part of the VNF onboarding process.

3.1.3.4. WAN Connectivity

The vCloud NFV platform supports the creation of Layer 2 adjacency between applications residing in two different locations. This is achieved with the use of NSX v6.2.1 universal logical switches. Connectivity between the sites is provided through a wide area network (WAN) link, with stretched Layer 2 networks presented across sites at a maximum round-trip latency of 20 ms. This is the maximum latency allowed for remote vCloud Director cells to access local vCenter Servers over the WAN link.
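
Before stretching the management network, it is worth confirming that the inter-site round-trip time stays within the 20 ms budget. The sketch below estimates latency by timing TCP connections to the remote vCenter Server on port 443; the target host name is an assumed example, and a TCP connect time is only a rough proxy for true network RTT.

import socket
import time

def estimate_rtt(host="vcenter-remote.example.local", port=443, samples=5):
    """Return the average TCP connect time in milliseconds over several samples."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        timings.append((time.perf_counter() - start) * 1000.0)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    rtt_ms = estimate_rtt()
    status = "is within" if rtt_ms <= 20.0 else "exceeds"
    print(f"Average connect time {rtt_ms:.1f} ms {status} the 20 ms budget")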

As an alternative, L3 routing may also be used across the WAN links; however, this introduces additional management and network overhead. It is supported as long as the latency requirement for the vCloud Director cells can be met.

For a full description of the design considerations relevant to an NSX deployment, VMware recommends that the NSX design guide4 be reviewed in conjunction with this document.

Both vCenter Server instances are deployed as virtual appliances with an embedded database and an external Platform Services Controller (PSC). The appliance-based deployment model is an alternative to the Windows-based system: it is preconfigured, hardened, and faster to deploy. Use of the appliance allows for a simplified design, ease of management, and reduced administrative effort; however, the Windows-based system may also be used.

The Platform Services Controller (PSC) contains common infrastructure services such as VMware vCenter™ Single Sign-On, VMware Certificate Authority (VMCA), licensing, and server reservation and registration services. Each vCenter Server utilizes a pair of load-balanced PSCs that are linked together. Both PSCs of the same vCenter Server are joined into one Single Sign-On domain. An NSX ESG is used as the load balancer for the PSCs. This vCenter Server architecture allows for scalability: as the infrastructure grows, additional vCenter Servers and PSCs may be added.

vCenter Server system availability is provided through the use of vSphere HA for the appliance and vSphere Replication for a point-in-time backup of the vCenter data.

3.2.2 NSX for vSphere Manager

The NSX for vSphere Manager provides the centralized management plane for the NSX for vSphere architecture and has a one-to-one mapping with a vCenter Server. It performs the following functions:

• Provides the central configuration UI and the REST API entry points for NSX for vSphere (see the sketch following this list).
• Deploys NSX Controller clusters, Distributed Logical Routers (DLRs), and Edge service gateways in the form of OVF-format virtual appliances.
• Prepares ESXi hosts for NSX for vSphere by installing the VXLAN, distributed routing, and firewall kernel modules as well as the User World Agent (UWA).
• Generates self-signed certificates for the NSX Controllers and ESXi hosts to secure control plane communications with mutual authentication.
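
The sketch below illustrates calling one of those REST entry points to retrieve the NSX Controller inventory. The manager address and credentials are assumptions, and the /api/2.0/vdn/controller path is an assumption about the NSX-V API; confirm it against the NSX API guide for the version in use.

import requests

NSX_MANAGER = "https://nsxmanager01.example.local"  # assumed NSX Manager address
AUTH = ("admin", "changeme")                         # assumed credentials

def get_controllers():
    # Basic-auth GET against the NSX Manager; verify=False only for lab certificates.
    resp = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/controller", auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.text  # XML describing the deployed NSX Controller nodes

if __name__ == "__main__":
    print(get_controllers())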

As with vCenter Server, two instances of NSX Manager are deployed; one is used to provide network services to the MANO components, such as a load balancer for high availability configurations.

3.2.3 vCloud Director

vCloud Director introduces resource abstraction and controls by creating dynamic pools of resources that can be consumed by NFVI tenants. Each NFVI tenant sees only the resources that have been explicitly defined during the VNF onboarding process or through a subsequent request to the tenant operations team.

A highly available vCloud Director implementation using multiple vCloud Director cells is deployed in a vCloud Director Server Group. All cells in the server group are stateless and use a shared, highly available, clustered database.

vCloud Director interaction with the RabbitMQ message broker and the VMware vRealize® Orchestrator™ workflow engine can be used to expose public APIs to northbound management and orchestration components such as VNF Managers.
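
As an illustration of how a northbound component might consume the vCloud Director API, the sketch below authenticates against the /api/sessions endpoint and retrieves the organization list. The vCloud Director address, credentials, and API version header value are assumptions; adjust them to the deployed release.

import requests

VCD = "https://vcloud.example.local"        # assumed vCloud Director address
USER = "admin@System"                        # the vCD API expects the user@org form
PASSWORD = "changeme"
ACCEPT = "application/*+xml;version=20.0"    # assumed API version

def vcd_login():
    """Log in to vCloud Director and return a requests.Session carrying the auth token."""
    session = requests.Session()
    session.verify = False  # lab certificates only
    resp = session.post(f"{VCD}/api/sessions", auth=(USER, PASSWORD),
                        headers={"Accept": ACCEPT})
    resp.raise_for_status()
    # Subsequent calls authenticate with the token returned in this response header.
    session.headers.update({
        "Accept": ACCEPT,
        "x-vcloud-authorization": resp.headers["x-vcloud-authorization"],
    })
    return session

if __name__ == "__main__":
    s = vcd_login()
    print(s.get(f"{VCD}/api/org").text)  # XML list of organizations visible to this user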

To manage virtual resources, vCloud Director connects to the resource vCenter Servers and NSX for vSphere Managers for each region. This connectivity can be handled by any cell in the server group; however, VMware recommends that the vCloud Director cluster be scaled out with an additional cell each time a new vCenter Server is added. Figure 9 shows the vCloud Director cell design.

