Wednesday 10 June 2015

Deleting Secret Data in Cloud IEEE PROJECT 2015-2016

CHAPTER 1

INTRODUCTION

Cloud computing is an Internet-based computing technology in which shared resources such as software, platforms, storage, and information are provided to customers on demand. It can be viewed as a virtual pool of computing resources, including infrastructure, software, applications, and business processes, delivered to users over the Internet. As an emerging computing paradigm, it aims to share storage, computation, and services transparently among a massive set of users. More precisely, cloud computing is a large-scale distributed computing paradigm driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms, and services is delivered on demand to external customers over the Internet [5].
Current cloud computing systems pose serious limitations to protecting users' data confidentiality. Since users' sensitive data is presented in unencrypted form to remote machines owned and operated by third-party service providers, the risk of unauthorized disclosure of that data by service providers may be quite high. There are many techniques for protecting users' data from outside attackers. This work presents an approach to protecting the confidentiality of users' data from service providers, ensuring that service providers cannot collect users' confidential data while the data is processed and stored in cloud computing systems. Cloud computing systems provide various Internet-based data storage and services. Due to its major benefits, including cost effectiveness, high scalability, and flexibility, cloud computing has been gaining significant momentum as a new paradigm of distributed computing for various applications, especially business applications, along with the rapid growth of the Internet. With the rise of the era of cloud computing, concerns about Internet security continue to increase.
1.1.1. EVOLUTION OF CLOUD COMPUTING
Cloud computing began to gain both awareness and popularity in the early 2000s. When the concept originally came to prominence, most people did not fully understand what role it fulfilled or how it helped an organization; in some cases people still do not. Cloud computing can refer to business intelligence (BI), complex event processing (CEP), service-oriented architecture (SOA), Software as a Service (SaaS), Web-oriented architecture (WOA), and even Enterprise 2.0.
With the advent and growing acceptance of cloud-based applications like Gmail, Google Calendar, Flickr, Google Docs, and Delicious, more individuals are now open to using a cloud computing environment than ever before. As this need has continued to grow, so has the supporting infrastructure. To meet those needs, companies like Google, Microsoft, and Amazon have been growing server farms in order to provide companies with the ability to store, process, and retrieve data while generating income for themselves. Google has brought online more than a million servers in over 30 data centers across its global network. Microsoft is also investing billions to grow its own cloud infrastructure and is currently adding an estimated 20,000 servers a month. With this amount of processing, storage, and computing power coming online, the concept of cloud computing is more of a reality than ever before. The growth of cloud computing has had the net effect of businesses migrating to a new way of managing their data infrastructure, and has been described as driving massive centralization in order to take advantage of economies of scale in computing power, energy consumption, cooling, and administration.
1.1.2. CLOUD ARCHITECTURE
The architecture of the cloud involves multiple cloud components communicating with each other over application programming interfaces (APIs), usually web services. The two most significant components of cloud computing architecture are known as the front end and the back end. The front end is the part seen by the client, i.e. the customer. This includes the client's network or computer, and the applications used to access the cloud via a user interface such as a web browser. The back end of the cloud computing architecture is the cloud itself, which comprises various computers, servers, and data storage devices.
The general architecture of a cloud platform, also known as the cloud stack, is given in figure 3.1 [5]. Cloud services may be offered in various forms from the bottom layer to the top layer, with each layer representing one service model. The three key cloud delivery models are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). IaaS is offered in the bottom layer, where resources are aggregated and managed physically (e.g., Emulab) or virtually (e.g., Amazon EC2), and services are delivered in the form of storage (e.g., GoogleFS), networking (e.g., OpenFlow), or computational capability (e.g., Hadoop MapReduce). The middle layer delivers PaaS, in which services are provided as an environment for programming (e.g., Django) or software execution (e.g., Google App Engine). SaaS is located in the top layer, in which a cloud provider further confines client flexibility by merely offering software applications as a service. Apart from service provisioning, the cloud provider maintains a suite of management tools and facilities (e.g., service instance life-cycle management, metering and billing, dynamic configuration) in order to manage a large cloud system.
Cloud deployment models include public, private, community, and hybrid clouds. Public clouds are external or publicly available cloud environments that are accessible to multiple tenants, whereas private clouds are typically tailored environments with dedicated virtualized resources for particular organizations. Similarly, community clouds are tailored for particular groups of customers [3].


1.1.3. CLOUD SECURITY CHALLENGES
The world of computation has changed from centralized to distributed systems, and now we are returning to virtual centralization in the form of cloud computing. The location of data and processes makes the difference in the realm of computation. In cloud computing, service and data maintenance are provided by a vendor, which leaves the client/customer unaware of where the processes are running or where the data is stored; logically speaking, the client has no control over them. Cloud computing uses the Internet as the communication medium. When we look at the security of data in cloud computing, the vendor has to provide some assurance in service level agreements (SLAs) to convince the customer on security issues. Organizations that use cloud computing as a service infrastructure would like to critically examine the security and confidentiality issues for their business-critical sensitive applications.
Traditional security issues are still present in cloud computing environments, but as enterprise boundaries have been extended to the cloud, traditional security mechanisms are no longer suitable for applications and data in the cloud. Traditional concerns involve computer and network intrusions or attacks that will be made possible, or at least easier, by moving to the cloud. Cloud providers respond to these concerns by arguing that their security measures and processes are more mature and tested than those of the average company. If companies are worried about insider threats, it could be easier to lock down information administered by a third party than in-house. In addition, it may be easier to enforce security via contracts with online service providers than via internal controls. Due to the openness and multi-tenant characteristics of the cloud, cloud computing is having a tremendous impact on the information security field [2].

Availability concerns center on critical applications and data being available. Well-publicized incidents of cloud outages include Gmail. As with the traditional security concerns, cloud providers argue that their server uptime compares well with the availability of cloud users' own data centers. Cloud services are thought of as providing more availability, but perhaps not: there may be more single points of failure and attack. With third-party data control, the legal implications of data and applications being held by a third party are complex and not well understood. There is also a potential lack of control and transparency when a third party holds the data. Part of the hype of cloud computing is that the cloud can be implementation independent, but in reality regulatory compliance requires transparency into the cloud [5][6].

1.1.4 CHARACTERISTICS OF CLOUD COMPUTING

Cloud services exhibit essential characteristics that demonstrate their relation to, and differences from, traditional computing approaches [6]:

On-demand self-service - A consumer can unilaterally provision computing capabilities such as server time and network storage as needed automatically, without requiring human interaction with a service provider.

Broad network access - Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs) as well as other traditional or cloud based software services.


Resource pooling - The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a degree of location independence in that the customer generally has no control over or knowledge of the exact location of the provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).

Rapid elasticity - Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

1.2 Need For Study
The client's concerns about data security, data integrity, and sharing data with a specific group of people must be addressed. There are multiple ways of achieving this, for example: encrypting data on the client machine before storing it on the cloud storage server; computing a hash of the data on the client machine and storing that hash locally; or the client taking on the responsibility of sharing the secret encryption key with a specific group of people. However, it becomes tedious for the client to keep and share such information, and moreover, if the device that stores it is lost or stolen, the entire data set is threatened. Alternatively, the same cloud storage provider could offer the services for secure sharing, hashing, and encryption/decryption, but since an administrator can have access to both services for maintenance, the data might be compromised despite the security service provided by the cloud storage provider. The aforementioned approaches burden the client by making it additionally responsible for securing its data before storing it in the cloud.
1.3 Objectives of the study
Our objective is to build a security service provided by a trusted third party. In detail:
1. To develop a framework that will maintain the confidentiality of the users' data.
2. To build a client application that uses the above framework while uploading, downloading, and deleting data to and from the cloud.

CHAPTER 2

LITERATURE REVIEW

          Cloud computing uses the Internet as the communication medium. When we look at the security of data in cloud computing, the vendor has to provide some assurance in service level agreements (SLAs) to convince the customer on security issues.
Security can be provided in two ways
(1) Hardware security (using TPM)
(2) Software security (using Key approach)
A Trusted Platform Module (TPM) provides secure asymmetric key generation. The paper [11] describes the use of a secure key-generating authority in an implementation of Shamir's identity-based signature scheme. The authors propose identity-based asymmetric cryptosystems (IBC) together with an identity-based signature. The proposed IBS scheme is proven secure against forgery under chosen-message attacks. The paper also proposes assigning the TPM as the key-generating authority and lists the various benefits of implementing it. The paper [12] first identifies the challenges of establishing trust in the cloud and then proposes a secure framework that helps address those challenges. It is an extension of the authors' previous work, in which they proposed a framework for establishing trust in the cloud environment; the current paper extends that work to cover application data and its integration with infrastructure-management data. The proposed framework [2] has four types of software agents, each running on trusted devices. The paper also explains controlled content sharing between devices.
In [13], security is ensured using C-code-like formal modeling at the application level. As a result of this approach, security of the protocol is ensured not only at the abstract level but also at the concrete level. In [14], the authors propose virtualization of the Trusted Platform Module, so that not only a single machine but any number of virtual machines can use the TPM; doing so supports higher-level services such as remote attestation. Various challenges are addressed in the following table.
                                  

Table 2.1 Challenges in TPM

Secret sharing was invented independently by Adi Shamir [22] and George Blakley [23] in 1979, based on Lagrange interpolation and linear projective geometry respectively. A secret sharing scheme consists of one dealer, n participants (or players), an original secret, a secret distribution algorithm, and a secret reconstruction algorithm. The dealer shares a secret among a given set of n participants such that every k of those participants (k ≤ n) can reconstruct the secret from their shares together, while any group of fewer than k participants gets no information about the secret [21-23]. Take Shamir's scheme for instance: any k out of n shares may be used to recover the secret. The scheme relies on the idea that 2 points are sufficient to define a line, 3 points a parabola, 4 points a cubic curve, and so forth; that is, it takes k points to define a polynomial of degree (k-1). The method is to create a polynomial of degree k-1 with the secret as the first coefficient and the remaining coefficients picked at random, then find n points on the curve and give one to each of the players. When at least k of the n players reveal their points, there is sufficient information to fit a (k-1)-degree polynomial to them, the first coefficient being the secret. The implicit assumption in the original formulation of secret sharing was that each participant is either honest or corrupt, and that honest participants are all willing to cooperate when the dealer requests reconstruction of the secret. Such schemes include Shamir's threshold secret sharing scheme [21] based on polynomial interpolation, Blakley's geometric threshold secret sharing scheme [22], Mignotte's scheme [24], and the Asmuth-Bloom scheme [25] based on the Chinese remainder theorem, proposed in 1983. Gradually, researchers began to consider the security problems of secret sharing itself, such as how to detect dishonest players. Harn proposed a threshold secret sharing scheme [26] based on digital signatures with certification functions in 1995.
Hsu and Wu's scheme [27] improved Harn's scheme, proposing an encryption secret sharing scheme based on discrete logarithms with more efficient signature verification. The scheme of Han et al. [28] used a certification center to detect dishonest shares. Marsh and Schneider designed a secret sharing system [29] that was fault-tolerant and attack-tolerant. All of this made research on secret sharing schemes more and more mature. The works [30][31][32] were also secret sharing schemes with verification mechanisms. Starting with the work of Halpern and Teague [33], participants in secret sharing are neither honest nor corrupt but are instead viewed as rational and assumed to act in their own self-interest. Rational secret sharing is a problem at the intersection of cryptography and game theory: in essence, a dealer wishes to engineer a communication game that, when rationally played, guarantees that each of the players learns the dealer's secret. Yet all solutions proposed so far did not rely solely on the players' rationality but also on their beliefs, and were quite inefficient. Micali and Shelat exhibited a very efficient and purely rational solution with a verifiable trusted channel in their scheme [14]. Fuchsbauer et al. also proposed a new methodology for rational secret sharing, leading to various instantiations in both the two-party and multi-party settings [15]. Because of its generality, secret sharing has entered many fields, such as image secret sharing [16][17]. More and more information security applications use secret sharing schemes to protect confidential key data, and verifiable and proactive sharing mechanisms are hotly studied [18-21].
Secret sharing schemes can be broadly categorized as follows:
1. Traditional Secret sharing scheme
2. Threshold Secret sharing scheme
3. Threshold Changeable Secret sharing scheme
4. Verifiable Secret sharing scheme
Secret sharing is a technique for protecting sensitive data, such as cryptographic keys. It is used to distribute a secret value into a number of parts, or shares, that have to be combined together to access the original value. These shares can then be given to separate parties that protect them using standard means, such as memorization or storage in a computer. Secret sharing is used in modern cryptography to lower the risks associated with compromised data: sharing a secret spreads the risk of compromising the value across several parties [27].
 Traditional Secret Sharing Scheme
Shamir [21] presented the first secret sharing method in 1979. Secret sharing involves transmitting different shares over different channels; with a single share, nobody can see the entire secret message. The general idea behind secret sharing is to distribute a secret to n different participants so that any k participants can reconstruct the secret, and any (k-1) or fewer participants cannot reveal anything about it. Such schemes are also known as (k, n) threshold-based schemes. Any secret sharing scheme has the following two processes:
1. Distribution process - the input is the secret, which gets partitioned into n shares S1, S2, ..., Sn that are privately delivered to the participants.
2. Reconstruction process - the secret is reconstructed from a suitable set of shares using a certain algorithm.
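The two processes above can be sketched for the simplest special case k = n, where every share is required. The XOR-based split below is an illustrative toy (it is not the scheme used in this project); with fewer than all n shares, the remaining data is indistinguishable from random bytes:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def distribute(secret: bytes, n: int) -> list:
    # Distribution process: n-1 random shares, plus one final share
    # chosen so that the XOR of all n shares equals the secret.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares) -> bytes:
    # Reconstruction process: XOR all n shares back together.
    return reduce(xor_bytes, shares)

shares = distribute(b"top secret", 3)
assert reconstruct(shares) == b"top secret"
```

Dropping even one share leaves the rest looking uniformly random, which is the "no information" property the text describes, at the cost of requiring all participants.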
Threshold Secret Sharing Scheme:
These schemes were the first kind, constructed independently by Shamir using polynomial interpolation [21] and Blakley using finite geometry [28]. To share a secret we can split it and spread the pieces among all participants. In some schemes, reconstructing the secret requires combining all shares from all participants, but this might not be practical, since we might need the secret reconstructed by some of the participants and not all. The reason is as follows: imagine a country splits the access codes for its missiles among three officials, and they find themselves in dire need to access the missiles, but one of the officials is not present or simply refuses to attack. Then we need a different scheme, where a subset of the participants can reconstruct the secret. Such schemes are secure and do not require all n shares [24]. For example, consider a board of directors of a defense ministry that would like to protect a secret missile formula. The president should be able to access the formula when needed, but in an emergency any 3 of the 12 board members should be able to unlock it together. This can be accomplished by a secret sharing scheme with k = 3 and n = 12, where 3 shares are given to the president and one share is given to each board member. These schemes are further classified depending on the size of the shares. Perfect Secret Sharing (PSS): these schemes do not allow the size of the shares to become smaller than the size of the secret. Ramp Secret Sharing (RSS): these schemes achieve the goal of reducing the size of the shares, but at the cost of some degraded protection of the secret.
Threshold Changeable Secret Sharing Scheme
The threshold changeable secret sharing scheme was invented by Wang and Wong for changing thresholds in the absence of secure channels after the setup of threshold secret sharing schemes [24]. Initially, a perfect (t, n) threshold scheme is constructed that is threshold-changeable to a new threshold t', optimal with respect to the share size. However, these threshold changeable schemes, along with most previously known schemes, turn out to be insecure under collusion attacks by players holding initial shares [29].
Verifiable Secret Sharing Scheme
This scheme was first introduced to overcome the problem of dishonest dealers. VSS schemes let the participants verify that their shares are consistent, so that they can properly reconstruct the secret. To get a clear idea of how these schemes work, assume a dealer Trent sends shares to Alice, Bob, Carol, and Dave. The only way they can be sure they have a valid share is to reconstruct the secret, but it may happen that Trent sent a bogus share to Bob, or that Bob received a bad share as a result of a communication error. VSS schemes allow these participants to validate their shares without needing to reconstruct the secret [25]. Such a scheme is designed to resist an adversary who can corrupt the dealer and some of the participants. VSS requires an additional algorithm, called verify, that allows participants to check their shares before any reconstruction attempt [26].
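One standard way to realize the verify algorithm is a Feldman-style commitment scheme (named here as an illustration; the cited works may use different constructions). The dealer publishes commitments to the polynomial coefficients, and each participant checks their own share against them without learning anything about the secret. The tiny group parameters below (g = 2 of order q = 11 modulo P = 23) are toy values chosen only for readability:

```python
import random

P, q, g = 23, 11, 2   # toy parameters: g has multiplicative order q modulo P

def deal(secret: int, k: int, n: int):
    # Shamir shares over GF(q), plus public commitments C_j = g^{a_j} mod P.
    coeffs = [secret % q] + [random.randrange(q) for _ in range(k - 1)]
    shares = {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
              for i in range(1, n + 1)}
    commits = [pow(g, c, P) for c in coeffs]
    return shares, commits

def verify(i: int, share: int, commits) -> bool:
    # g^share must equal the product of C_j^(i^j) mod P;
    # the check reveals nothing about the secret itself.
    rhs = 1
    for j, C in enumerate(commits):
        rhs = rhs * pow(C, pow(i, j, q), P) % P
    return pow(g, share, P) == rhs

shares, commits = deal(7, k=3, n=5)
assert all(verify(i, s, commits) for i, s in shares.items())
assert not verify(1, (shares[1] + 1) % q, commits)   # a tampered share fails
```

Because g has order q, the exponent arithmetic lines up with the share arithmetic over GF(q), which is why the product of commitments reproduces g raised to the polynomial's value at i.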



CHAPTER 3

PROBLEM DEFINITION

          Cloud service providers ask customers to store their account information in the cloud, and thus have access to that information. This presents a threat to customers' privacy. Many SLAs specify the privacy of sensitive information, but it is difficult for customers to make sure the proper rules are enforced; the cloud lacks the transparency that would allow customers to monitor their own private information. When a customer decides to use multiple cloud services, the customer has to store his/her password in multiple clouds; the more cloud services the customer subscribes to, the more copies of the user's information exist. This is a security issue for both the customers and the cloud service providers. For every cloud service, the customer needs to exchange his/her authentication information, and this redundancy may lead to an exploit of the authentication mechanism. Cloud service providers use different authentication technologies for authenticating users; this may have less impact on SaaS than on PaaS and IaaS, but it still presents a challenge to customers. In spite of all the authentication mechanisms provided by cloud service providers, customer privacy can be lost while data is stored on their premises. In our project we provide a secure mechanism in which cloud service providers cannot access customers' data without the customers' knowledge; if a cloud service provider wants to access a customer's data, mutual authentication between the customer and the provider is required. We introduce a third-party authority that generates a long key and shares it between the customers and the cloud service providers. We thereby address the privacy issue that exists for the customers.
CHAPTER 4

PROPOSED METHOD


          As described above, cloud service providers have access to the account information customers store in the cloud, and SLAs alone cannot guarantee that privacy rules are enforced. We therefore introduce a third party that generates a long secret key and gives one half of it to the cloud service provider and the other half to the customer. Once customers register for a new account, they can log in to the system. After logging in, before performing any operation on a file, they first request the third party to generate a long key. The third party, as requested, generates the key and delivers the corresponding shares to the cloud service provider and the customer respectively. Now, if the cloud service provider tries to access a file without the customer's knowledge, a mail is immediately sent to the customer indicating that the CSP is trying to access their data. If customers want to share their data with the CSP, they can share their half of the key; the CSP combines its half with the customer's half to obtain the full key, which must then be verified by the third party. Only then can the cloud service provider access the customer's file. The proposed system thus provides privacy for customers' data, ensuring that it cannot be viewed without their knowledge.
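The third party's key generation and splitting step can be sketched as follows. This is an illustrative sketch, not the project's actual implementation; the function name, the 32-byte key size, and the use of a SHA-256 digest for later verification are all assumptions:

```python
import hashlib
import secrets

def generate_and_split(nbytes: int = 32):
    # The third party generates a long random key and splits it in half.
    key = secrets.token_bytes(nbytes)
    csp_half = key[:nbytes // 2]       # delivered to the cloud service provider
    user_half = key[nbytes // 2:]      # delivered to the customer
    # The third party keeps only a digest, so it can later verify a
    # recombined key without storing the full key itself.
    digest = hashlib.sha256(key).hexdigest()
    return csp_half, user_half, digest
```

Because neither party holds the whole key, neither can pass the third party's verification alone; access requires the customer to hand over their half explicitly.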

There is one drawback in this scenario: once the admin has obtained the user's key, he could view the user's files at any time without the user's knowledge. To overcome this, we set a default logout time for the admin, and once the user shares his key with the admin, the user's half of the key is updated after a certain interval of time, so that even if the admin still holds the key obtained from the user, it is now invalid. If the admin wants to see the user's files again, he has to request the key from the user once more.
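The timed invalidation of the user's half key can be sketched as below. The class name, the TTL mechanism, and the rotation-on-expiry policy are assumptions made for illustration; the project only specifies that the half key is refreshed after a certain interval:

```python
import secrets
import time

class RotatingHalfKey:
    """Sketch: the user's half key is replaced after `ttl` seconds."""

    def __init__(self, half: bytes, ttl: float = 300.0):
        self.half, self.ttl = half, ttl
        self.issued = time.monotonic()

    def rotate(self) -> None:
        # Replace the half key; any previously shared copy becomes invalid.
        self.half = secrets.token_bytes(len(self.half))
        self.issued = time.monotonic()

    def is_valid(self, presented: bytes) -> bool:
        if time.monotonic() - self.issued > self.ttl:
            self.rotate()
        return presented == self.half

key = RotatingHalfKey(secrets.token_bytes(16), ttl=0.05)
shared_copy = key.half
assert key.is_valid(shared_copy)        # a fresh copy is accepted
time.sleep(0.1)
assert not key.is_valid(shared_copy)    # an expired copy is rejected
```

Rotating lazily inside `is_valid` keeps the sketch short; a real deployment would more likely rotate on a server-side schedule.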
In our proposed system we implement a modified version of Shamir's secret sharing algorithm to generate a secret key and distribute its shares to the admin and user sides. The following are the modules and their functionalities:
4.2 Modules:
4.2.1 User side:
4.2.1.1 Data upload scenario
1. The end user registers for a new account.
2. The end user logs in to the system with his/her username and password.
3. Once the user is authenticated, a separate folder is created for the user, which records all of the user's activities.
4. Before uploading files, the user has to request the secret key from the third party.
5. Once the user receives the key from the third party, he can upload files to the cloud.
4.2.1.2 Data download scenario
1. The end user registers for a new account.
2. The end user logs in to the system with his/her username and password.
3. Once the user is authenticated, a separate folder is created for the user, which records all of the user's activities.
4. Once the user receives the key from the third party, he can download files from the cloud.
4.2.1.3 Data delete scenario
1. The end user registers for a new account.
2. The end user logs in to the system with his/her username and password.
3. Once the user is authenticated, a separate folder is created for the user, which records all of the user's activities.
4. Once the user receives the key from the third party, he can delete files from the cloud.
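The three user-side scenarios above share the same skeleton, which can be strung together with in-memory stubs as below. All class and method names here are assumptions for illustration, not the project's actual API (the real system is described as Java on GlassFish with MySQL):

```python
import secrets

class ThirdParty:
    def request_key(self, user: str) -> bytes:
        # Step 4: generate a fresh secret key on the user's request.
        return secrets.token_bytes(16)

class CloudStore:
    def __init__(self):
        self.users = {"alice": "pw123"}
        self.folders = {}                      # per-user activity folders

    def authenticate(self, username: str, password: str) -> str:
        if self.users.get(username) != password:
            raise PermissionError("bad credentials")
        # Step 3: create the user's folder on first successful login.
        self.folders.setdefault(username, {})
        return username

    def store(self, user: str, name: str, data: bytes, key: bytes) -> None:
        self.folders[user][name] = data        # a real system would encrypt with `key`

class CloudClient:
    def __init__(self, third_party, cloud):
        self.third_party, self.cloud = third_party, cloud
        self.user, self.key = None, None

    def login(self, username: str, password: str) -> None:
        self.user = self.cloud.authenticate(username, password)   # steps 1-3

    def upload(self, name: str, data: bytes) -> None:
        if self.key is None:
            self.key = self.third_party.request_key(self.user)    # step 4
        self.cloud.store(self.user, name, data, self.key)         # step 5

client = CloudClient(ThirdParty(), CloudStore())
client.login("alice", "pw123")
client.upload("notes.txt", b"hello cloud")
```

Download and delete would follow the same pattern, swapping `store` for a retrieve or remove call once the key has been obtained.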


4.2.2 Admin side
When the admin wants to see another user's files, he has to do the following:
1. The admin enters his half of the key, which was provided by the third party.
2. He then requests the remaining half of the key from the user, so that the two halves can be combined to view the file.
3. If the user wants to share his file with the admin, he provides his half of the key and grants the admin permission to view the file.
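The key combination the admin-side steps describe can be sketched as follows. The use of a SHA-256 digest held by the third party is an assumption made for illustration; the project only states that the recombined key is verified by the third party:

```python
import hashlib
import secrets

def combine_and_verify(admin_half: bytes, user_half: bytes, digest: str) -> bool:
    # The third party checks that the two halves recombine into the key it issued.
    return hashlib.sha256(admin_half + user_half).hexdigest() == digest

# Demo: the third party issues a key, splits it, and later verifies it.
key = secrets.token_bytes(32)
admin_half, user_half = key[:16], key[16:]
digest = hashlib.sha256(key).hexdigest()
assert combine_and_verify(admin_half, user_half, digest)
assert not combine_and_verify(admin_half, secrets.token_bytes(16), digest)
```

Since the third party stores only the digest, it can confirm a correct recombination without ever being able to hand the full key to the admin on its own.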
CHAPTER 5

EXPERIMENTAL RESULTS

5.1. Hardware/Software requirements
In order to implement the system we need:
1. GlassFish Server: to host the web service
2. Java 1.6
3. MySQL 5.2
5.2 Typical Existing Scenarios:
5.2.1 User side
5.2.1.1 Data upload scenario
1. The end user registers for a new account.
2. The end user logs in to the system with his/her username and password.
3. Once the user is authenticated, a separate folder is created for the user, which records all of the user's activities.
4. The user can then upload files and store them in the cloud.
5.2.1.2 Data download scenario
1. The end user registers for a new account.
2. The end user logs in to the system with his/her username and password.
3. Once the user is authenticated, a separate folder is created for the user, which records all of the user's activities.
4. The user can then download files from the cloud.
5.2.1.3 Data Delete scenario
1. The end user registers for a new account.
2. The end user logs in to the system with his/her username and password.
3. Once the user is authenticated, a separate folder is created for the user, which records all of the user's activities.
4. The user can then delete files from the cloud.
5.2.2 Admin side
1. The admin logs in to the system and has full control over it.
2. The admin can view all files deleted by other users.
3. The admin can select whichever file he wants to recover; he can recover all files deleted by other users.



Existing Shamir secret sharing algorithm:

Divide the data D into n pieces D1, D2, ..., Dn in such a way that:
1. Knowledge of any k or more pieces makes D easily computable.
2. Knowledge of any k-1 or fewer pieces leaves D completely undetermined.
This scheme is called a (k, n) threshold scheme. If k = n, then all participants are required together to reconstruct the secret. To build the shares, choose (k-1) coefficients a1, a2, ..., ak-1 at random, and let a0 be the secret S.
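The construction above can be sketched as follows. This is an illustrative implementation of the standard scheme, not the project's modified variant; `PRIME` is a toy field size (the Mersenne prime 2^13 - 1) chosen only to keep the arithmetic readable, and the secret must be smaller than it:

```python
import random

PRIME = 2**13 - 1  # small prime field for illustration

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # a0 is the secret; a1..a(k-1) are chosen at random.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, j, PRIME) for j, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term a0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(1234, k=3, n=6)
assert reconstruct(shares[:3]) == 1234   # any 3 shares suffice
assert reconstruct(shares[2:5]) == 1234
```

Evaluating the polynomial at x = 1..n yields the n shares, and interpolating any k of them at x = 0 returns a0, exactly as the (k, n) threshold property requires.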





CHAPTER 6

CONCLUSION AND FUTURE WORK

Conclusion
Security is an important issue in any application, and authentication plays a very important role in providing it. Here, authentication is provided through secret sharing schemes. Cloud service providers ask customers to store their account information in the cloud and therefore have access to that information, which presents a threat to customers' privacy. When a customer decides to use multiple cloud services, the customer has to store his/her password in multiple clouds; the more cloud services the customer subscribes to, the more copies of the user's information exist. This is a security issue for both the customers and the cloud service providers. Our main idea is to secure users' data from the cloud service provider.
Future Work
In future work we will implement a secure framework by proposing a new scheme, so that the cloud service providers cannot access customers' data without the users' knowledge, even though it is stored on the CSP's premises.

APPENDIX A

SCREENSHOTS