We present a new method of steganalysis, the detection of hidden messages, for least significant bit (LSB) replacement embedding. The method uses lossless image compression algorithms to model images bitplane by bitplane. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, and we present the case for a relative bias model for between-image error.
The basic premise is that messages hidden by replacing LSBs of image pixels do not possess the same statistical properties as natural image data and are therefore likely to be incompressible by compressors designed for images. In fact, the hidden data are usually compressed files themselves, which may or may not be encrypted.
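To illustrate this premise, here is a minimal, self-contained sketch (not the paper's compressors): a toy gradient image whose structured LSB plane compresses well under zlib, while the same plane after LSB replacement with random message bits does not. All names and parameters are illustrative.

```python
import random
import zlib

random.seed(0)

# Toy "image": a smooth gradient, whose LSB plane is highly structured.
pixels = [min(255, x // 4) for x in range(4096)]

def lsb_plane_bytes(pix):
    # Pack the least significant bits into bytes for compression.
    bits = [p & 1 for p in pix]
    return bytes(
        sum(b << i for i, b in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )

plain = lsb_plane_bytes(pixels)

# Embed a random (i.e. incompressible) message by LSB replacement.
stego_pixels = [(p & ~1) | random.getrandbits(1) for p in pixels]
stego = lsb_plane_bytes(stego_pixels)

ratio_plain = len(zlib.compress(plain)) / len(plain)
ratio_stego = len(zlib.compress(stego)) / len(stego)
# The structured cover LSB plane compresses; the stego plane does not.
```

The gap between the two compression ratios is the detection signal exploited by the compression-based steganalyzers described above.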
In either case, the hidden messages are incompressible. In this work, we study three image compressors, one standard and two that we developed. The results show that many images can be eliminated as having possible steganographic content, since their LSBs compress more than a hidden message typically would. Digital fingerprinting, watermarking, and tracking technologies have gained importance in recent years in response to growing problems such as digital copyright infringement. While fingerprints and watermarks can be generated in many different ways, the use of natural language processing for these purposes has so far been limited.
Measuring the similarity of literary works for automatic copyright infringement detection requires identifying and comparing the creative expression of content in documents. In this paper, we present a linguistic approach to automatically fingerprinting novels based on their expression of content. We use natural language processing techniques to generate "expression fingerprints". These fingerprints consist of both syntactic and semantic elements of language.
Our experiments indicate that syntactic and semantic elements of expression enable accurate identification of novels and their paraphrases, providing a significant improvement over techniques used in the text classification literature for automatic copy recognition. We show that these elements of expression can be used to fingerprint, label, or watermark works; they represent features that are essential to the character of the works and that remain fairly consistent even when the works are paraphrased.
These features can be directly extracted from the contents of the works on demand and can be used to recognize works that would not be correctly identified either in the absence of pre-existing labels or by verbatim-copy detectors. Attacks on lexical natural language steganography systems. Author(s): Cuneyt M.
Delp. Text data forms the largest bulk of digital data that people encounter and exchange daily. For this reason, the potential use of text data as a covert channel for secret communication is an imminent concern. Even though information hiding in natural language text has started to attract great interest, there has been no study on attacks against these applications. In this paper we examine the robustness of lexical steganography systems, using a universal steganalysis method based on language models and support vector machines to differentiate sentences modified by a lexical steganography algorithm from unmodified sentences.
The experimental accuracy of our method was evaluated on the classification of steganographically modified sentences; on classification of isolated sentences we obtained a high recall rate, whereas the precision was low. Atallah. This paper gives an overview of the research and implementation challenges we encountered in building an end-to-end natural language processing based watermarking system. By natural language watermarking, we mean embedding the watermark into a text document, using the natural language components as the carrier, in such a way that the modifications are imperceptible to the readers and the embedded information is robust against possible attacks.
Of particular interest is using the structure of the sentences in natural language text to insert the watermark. We evaluated the quality of the watermarked text using an objective evaluation metric, the BLEU score, which is commonly used in the statistical machine translation community. Our current system prototype achieves 0. Shterev; Reginald L. Lagendijk. This paper presents a scheme for estimating a two-band amplitude scale attack within a quantization-based watermarking context. Quantization-based watermarking schemes comprise a class of watermarking schemes that achieves the channel capacity in terms of additive noise attacks.
First, we derive the probability density function (PDF) of the attacked data. Second, using a simplified approximation of the PDF model, we derive a Maximum Likelihood (ML) procedure for estimating the two-band amplitude scaling factor. Finally, experiments are performed with synthetic and real audio signals, showing the good performance of the proposed estimation technique under realistic conditions.
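As a rough illustration of the estimation idea (not the paper's two-band ML derivation), the following sketch recovers a single unknown amplitude-scaling factor applied to lattice-quantized data by a grid search over candidate scales; all parameters are hypothetical.

```python
import random

random.seed(1)

DELTA = 1.0
# Quantization-based watermarked samples: points on a uniform lattice plus
# small additive noise, then an unknown amplitude scaling (the attack).
true_scale = 0.83
watermarked = [DELTA * random.randint(-50, 50) for _ in range(2000)]
attacked = [true_scale * w + random.gauss(0, 0.02) for w in watermarked]

def lattice_residual(samples, scale):
    # Mean squared distance to the lattice after undoing a candidate scale.
    total = 0.0
    for y in samples:
        x = y / scale
        q = DELTA * round(x / DELTA)
        total += (x - q) ** 2
    return total / len(samples)

# Grid search as a crude stand-in for the ML estimator in the paper.
candidates = [0.5 + 0.01 * k for k in range(101)]   # 0.50 .. 1.50
est_scale = min(candidates, key=lambda s: lattice_residual(attacked, s))
```

Only the correct candidate re-aligns the data with the quantization lattice, so the residual has a sharp minimum at the true scale.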
We propose an extension of Rational Dither Modulation (RDM) to construct a scheme that is robust to arbitrary linear time-invariant filtering attacks, as opposed to standard Dither Modulation (DM), which we show to be extremely sensitive to those attacks. We illustrate the feasibility of DFT-RDM by passing the watermarked signal through an implementation of a graphic equalizer: the average error probability is small enough to justify the feasibility of adding a coding-with-interleaving layer to DFT-RDM.
Two easily implementable improvements are discussed: windowing and spreading. In particular, the latter is shown to lead to very large gains. Customer identification watermarking is today one of the most promising application domains of digital watermarking. It makes it possible to identify individual copies of otherwise indistinguishable digital content.
If done without any precautions, those individual watermarks are vulnerable to a number of specialized attacks based on an attacker collecting more than one individual copy. Fingerprinting algorithms are used to create watermarks robust against these attacks, but the resulting watermarks require a high payload from the watermarking algorithm. As soon as a large number of copies need to be distinguished and more than two copies are available to the attacker, the watermarks become too long to be embedded with current algorithms.
We present a novel alternative method to fight attacks aimed at individual customer identification watermarks. This is achieved by modifying the watermarked material in such a way that collusion attacks produce artifacts which significantly reduce the perceived quality of the attacked copy, while the quality of the individual copies is unaffected.
Until now, the sensitivity attack was considered a serious threat to the robustness and security of spread-spectrum-based schemes, since it provides a practical method of removing watermarks with minimum attacking distortion. Nevertheless, it had not been used to tamper with other watermarking algorithms, such as those which use side information.
Furthermore, the sensitivity attack has never been used to obtain falsely watermarked contents, also known as forgeries. In this paper a new version of the sensitivity attack, based on a general formulation, is proposed; this method does not require any knowledge of the detection function nor of any other system parameter, but only the binary output of the detector, making it suitable for attacking most known watermarking methods, both for tampering with watermarked signals and for obtaining forgeries.
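A heavily simplified sketch of attacking with only the detector's binary output: a correlation detector is treated as an oracle, and bisection along the scaling direction finds a minimally scaled signal just outside the detection region. The real attack is far more general; every parameter below is illustrative.

```python
import random

random.seed(2)

N = 256
secret = [random.choice((-1.0, 1.0)) for _ in range(N)]
host = [random.gauss(0, 1) for _ in range(N)]
marked = [h + 0.5 * s for h, s in zip(host, secret)]

def detector(signal):
    # Oracle: the attacker only observes this binary decision.
    corr = sum(x * s for x, s in zip(signal, secret)) / len(signal)
    return corr > 0.2

def blind_sensitivity_attack(signal, steps=60):
    # Bisect between the watermarked signal (inside the detection region)
    # and the zero signal (outside), querying only the binary detector.
    lo, hi = 0.0, 1.0          # hi keeps the mark, lo removes it
    for _ in range(steps):
        mid = (lo + hi) / 2
        candidate = [mid * x for x in signal]
        if detector(candidate):
            hi = mid
        else:
            lo = mid
    return [lo * x for x in signal]   # just outside the boundary

attacked = blind_sensitivity_attack(marked)
```

The attack removes the watermark with far less distortion than erasing the signal, even though no internal detector parameter was ever observed.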
The soundness of this new approach is tested by empirical results. Shi; Jiwu Huang. The ambiguity attack derives a valid watermark from a medium in order to defeat the ownership claim of the real owner. Most research suggests that it is difficult to design a provably secure, non-ambiguous watermarking scheme without a trusted third party. Recently, Li and Chang provided a specific blind additive spread spectrum watermarking scheme as an example that is provably non-ambiguous.
In this paper, a framework for quantization-based watermarking schemes and non-blind spread spectrum watermarking schemes to achieve non-ambiguity is proposed. As a result, many existing watermarking schemes can achieve provable non-invertibility by using this framework, and a non-ambiguous ownership verification protocol without a trusted third party may be constructed.
We have obtained closed-form solutions for the false positive rate of the underlying quantization-based schemes and spread spectrum watermarking schemes, both blind and non-blind. In addition, the required watermark length becomes much shorter than that required in Li and Chang's scheme. One of the important stages of fingerprint recognition is the registration of the fingerprints with respect to the original template.
This is not a straightforward task as fingerprint images may have been subject to rotations and translations. Popular techniques for fingerprint registration use a reference point to achieve alignment. In this paper, we propose a new approach for rotation invariant and reliable reference point detection applicable to fingerprints of different quality and types. Our approach is based on the integration of a directional vector field representing the doubled ridge orientations in fingerprints over a closed contour. We define the reference point as the point of the highest curvature.
Areas of high curvature in the fingerprint are characterized by large differences in the orientations and correspond to high curvatures in the directional vector fields. The closed contour integral of the orientation vector field, defined as above, over a circle centered at the reference point is maximal, and the values associated with such integrals are rotation invariant.
Experimental results show that the proposed approach can locate the reference point with high accuracy. A comparison with existing methods is provided. In this article, methods for user recognition by online handwriting are experimentally analyzed using a combination of demographic data of users in relation to their handwriting habits.
Online handwriting as a biometric method is characterized by high variation of characteristics, which influences the reliability and security of this method. These variations have not been researched in detail so far. Especially in cross-cultural applications, it is urgent to reveal the impact of personal background on security aspects in biometrics. Metadata represent the background of writers by capturing cultural, biological and conditional aspects such as first language, country of origin, gender, handedness, and experiences that influence handwriting and language skills.
The goal is the revelation of intercultural impacts on handwriting in order to achieve higher security in biometric systems. In our experiments, in order to achieve relatively high coverage, 48 different handwriting tasks accomplished by 47 users from three countries (Germany, India and Italy) have been investigated with respect to the relation between metadata and biometric recognition performance. For this purpose, hypotheses have been formulated and evaluated using well-known recognition error rates from biometrics.
The evaluation addressed both system reliability and security threats posed by skilled forgeries. For the latter purpose, a novel forgery type is introduced, which applies personal metadata to security aspects and includes new methods of security tests. Finally, we formulate recommendations for specific user groups and handwriting samples.
In this paper, we investigate the recognition performance of various projection-based features applied to registered 3D scans of faces. We apply the feature extraction techniques to three different representations of registered faces, namely 3D point clouds, 2D depth images and 3D voxel grids. We consider both global and local features. Global features are extracted from the whole face data, whereas local features are computed over blocks partitioned from the 2D depth images. The block-based local features are fused both at the feature level and at the decision level. The resulting feature vectors are matched using Linear Discriminant Analysis.
Experiments using different combinations of representation types and feature vectors are conducted on the 3D-RMA dataset. Akkermans; Fei Zuo.
In recent literature, privacy protection technologies for biometric templates have been proposed. Among these is the so-called helper-data system (HDS), based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers.
The binary feature vectors are integrated into the HDS, leading to a privacy-protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.
Biometric person authentication has been attracting considerable attention in recent years. Conventional biometric person authentication systems, however, simply store each user's template as-is on the system. If registered templates are not properly protected, the risk arises of template leakage to a third party and impersonation using biometric data restored from a template. We propose a technique that partially deletes and splits template information so as to prevent template restoration using only registered template information while enabling restoration for only that template's owner using error-correcting code.
This technique can be applied to general biometric authentication systems.
In this paper, we introduce this technique and evaluate template security with it by simulating a speaker verification system. On the comparison of audio fingerprints for extracting quality parameters of compressed audio. Author(s): P. Doets; M. Menor Gisbert; R. Audio fingerprints can be seen as hashes of the perceptual content of an audio excerpt. Applications include linking metadata to unlabeled audio, watermark support, and broadcast monitoring. Existing systems identify a song by comparing its fingerprint to pre-computed fingerprints in a database.
Small changes of the audio induce small differences in the fingerprint, and the song is identified if these fingerprint differences are small enough. In this paper, we study the relationship between compression bit-rate and fingerprint differences; in addition, we found that distances between fingerprints of the original and a compressed version can be used to estimate the quality (bit-rate) of the compressed version. We present a comparative study of the response to compression of three fingerprint algorithms (each representative of a larger set of algorithms), developed at Philips, the Polytechnic University of Milan, and Microsoft, respectively.
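The following toy sketch (not any of the three actual algorithms) illustrates how fingerprint distance grows with signal degradation: a crude band-energy-sign fingerprint is compared, via Hamming distance, between an original signal and versions degraded by increasing additive noise standing in for lower bit-rates. All parameters are illustrative.

```python
import math
import random

random.seed(3)

def fingerprint(samples, bands=32):
    # Toy fingerprint: one bit per band boundary, the sign of the
    # energy difference between neighbouring bands.
    size = len(samples) // bands
    energies = [
        sum(x * x for x in samples[b * size:(b + 1) * size])
        for b in range(bands)
    ]
    return [1 if energies[b] > energies[b + 1] else 0 for b in range(bands - 1)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Synthetic "audio": a tone with a slowly varying envelope.
original = [(1 + 0.5 * math.sin(0.002 * t)) * math.sin(0.05 * t)
            for t in range(4096)]
fp_orig = fingerprint(original)

# Heavier "compression" is emulated as stronger additive distortion.
distances = []
for noise in (0.01, 0.2, 1.0):
    degraded = [x + random.gauss(0, noise) for x in original]
    distances.append(hamming(fp_orig, fingerprint(degraded)))
```

The monotone trend of the Hamming distance with distortion strength is what makes the fingerprint difference usable as a quality estimate.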
We have conducted experiments both using the original algorithms and using versions modified to achieve similar operating conditions. Our study shows similar behavior for these three algorithms. Wow, or time warping caused by speed fluctuations in analog audio equipment, provides a wealth of applications in watermarking. Very subtle temporal distortion has been used to defeat watermarks, and as a component in watermarking systems. In the image domain, the analogous warping of an image's canvas has been used to defeat watermarks and has also been proposed to prevent collusion attacks on fingerprinting systems.
In this paper, we explore how subliminal levels of wow can be used for steganography and fingerprinting. We present both a low-bitrate robust solution and a higher-bitrate solution intended for steganographic communication. As already observed, such a fingerprinting algorithm naturally discourages collusion by averaging, owing to flanging effects when misaligned audio is averaged.
Another advantage of warping is that even when imperceptible, it can be beyond the reach of compression algorithms. We use this opportunity to debunk the common misconception that steganography is impossible under "perfect compression." It is well known that all information hiding methods that modify the least significant bits introduce distortions into the cover objects. These distortions have been utilized by steganalysis algorithms to detect that the objects have been modified.
It has been proposed that only coefficients whose modification does not introduce large distortions should be used for embedding. Our algorithm uses parity coding to choose the coefficients whose modifications introduce minimal additional distortion. We derive the expected value of the additional distortion as a function of the message length and the probability distribution of the JPEG quantization errors of cover images. Our experiments show close agreement between the theoretical prediction and the actual additional distortion.
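A minimal sketch of parity coding with minimal-distortion coefficient selection, under illustrative assumptions (integer coefficients, externally supplied per-coefficient distortion costs):

```python
import random

random.seed(4)

def embed_parity(group, bit, costs):
    # Embed one message bit as the parity of the group's LSBs.  If the
    # parity already matches, nothing changes; otherwise modify the single
    # coefficient whose modification is cheapest (smallest distortion cost).
    group = list(group)
    if sum(c & 1 for c in group) % 2 != bit:
        i = min(range(len(group)), key=lambda k: costs[k])
        group[i] ^= 1          # flip the least significant bit
    return group

def extract_parity(group):
    return sum(c & 1 for c in group) % 2

coeffs = [random.randint(-64, 64) for _ in range(8)]   # toy "DCT" coefficients
costs = [random.random() for _ in range(8)]            # hypothetical per-coefficient costs
stego = embed_parity(coeffs, 1, costs)
```

At most one coefficient per group changes, and always the cheapest one, which is the source of the expected-distortion reduction analyzed in the paper.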
In this paper, we construct blind steganalyzers for JPEG images capable of assigning stego images to known steganographic programs. Most of the features are calculated directly from the quantized DCT coefficients as their first-order and higher-order statistics. The features for cover images and for stego images embedded with three different relative message lengths are then used for supervised training.
We use a support vector machine (SVM) with a Gaussian kernel to construct a set of binary classifiers. Although the bulk of the results is for single-compressed stego images, we also report some preliminary results for double-compressed images created using F5 and OutGuess. This paper demonstrates that it is possible to reliably classify stego images according to their embedding techniques. Moreover, this approach shows promising results for tackling the difficult case of double-compressed images. MPSteg: hiding a message in the matching pursuit domain. Author(s): G.
Cancelli; M. Barni; G. Menegaz. In this paper we propose a new steganographic algorithm based on Matching Pursuit image decomposition. Many modern approaches to detecting the presence of hidden messages are based on statistical analysis, preferably on the analysis of higher-order statistical regularities.
The idea behind this work is to adaptively choose the elements of a redundant basis to represent the host image.
In this way, the image is expressed as the composition of a set of structured elements resembling basic image structures such as lines, corners, and flat regions. We argue that embedding the watermark at this more semantic level results in less modification of the low-level statistical properties of the image, and hence lower detectability of the presence of the hidden message.
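A bare-bones sketch of Matching Pursuit over a small redundant cosine dictionary (purely illustrative; the paper works with structured image atoms such as lines, corners, and flat regions):

```python
import math

def matching_pursuit(signal, dictionary, iterations=4):
    # Greedily pick the (unit-norm) atom best correlated with the residual,
    # record its coefficient, and subtract its contribution.
    residual = list(signal)
    decomposition = []
    for _ in range(iterations):
        best = max(
            range(len(dictionary)),
            key=lambda k: abs(sum(r * a for r, a in zip(residual, dictionary[k]))),
        )
        coef = sum(r * a for r, a in zip(residual, dictionary[best]))
        decomposition.append((best, coef))
        residual = [r - coef * a for r, a in zip(residual, dictionary[best])]
    return decomposition, residual

N = 64

def atom(freq):
    # Unit-norm cosine atom; the dictionary is redundant, not orthonormal.
    v = [math.cos(freq * t) for t in range(N)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

dictionary = [atom(0.1 * k) for k in range(1, 9)]
signal = [3.0 * a + 0.5 * b for a, b in zip(dictionary[2], dictionary[5])]
decomp, residual = matching_pursuit(signal, dictionary)
```

An MPSteg-style embedder would then modulate the selected coefficients (or the choice of atoms) to carry the hidden message, rather than modifying raw pixel statistics.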
Stego sensitivity measure and multibit-plane-based steganography using different color models. Author(s): Sos S. There are several steganographic methods that embed in palette-based images. In general, these schemes use RGB palette models. The restrictions of palette-based image formats impose limitations on existing models.
For example, it is unclear how to divide the colors of a palette vector for embedding purposes without causing visual degradation of the image. Another crucial difficulty is embedding in multiple bit planes while preserving the image's characteristics.
Possible solutions to these problems could be: (a) using a multi-bit embedding procedure; (b) using other color models; and (c) embedding only in non-informative regions. We therefore present a new secure, high-capacity palette-based steganographic method that embeds in multiple bit planes using different color models.
Computer simulations show the following advantages of the developed algorithm: (1) fewer modifications are present when compared to the BPCS steganographic method for palette-based images. The proposed method was shown to be immune to Chi-square and Pairs Analysis steganalysis attacks. In addition, the presented method uses different color models to represent the palettes. Analysis shows that the presented algorithm is also secure against detection by RS Steganalysis when using different color models. Piva; V. Cappellini; D. Corazzi; A. De Rosa; C. Orlandi; M.
Barni. Recently, research in the watermarking field has concentrated on security aspects. In a watermarking application, one of the most security-sensitive steps is the watermark extraction process: here, a prover has to prove to a verifier that a given watermark is present in the content. In the design of the system, it has to be considered that the prover is not a trusted party: the prover could try to exploit the knowledge acquired during watermark extraction to remove the embedded code and, consequently, undermine the security of the watermarking system.
To tackle this particular issue, it has been proposed to use cryptographic techniques known as zero-knowledge protocols to build a secure layer on top of the watermarking channel, able to protect the watermarking algorithm against possible information leakage.
Until now, zero-knowledge protocols have been applied to spread-spectrum-based detectable watermarking algorithms. In this paper, a novel zero-knowledge protocol designed for a Spread Transform Dither Modulation (ST-DM) watermarking algorithm, belonging to the class of informed watermarking systems, is proposed. Compression and rotation resistant watermark using a circular chirp structure. Author(s): Christopher E. Fleming; Bijan G. Mobasseri. Digital watermarks for images can be made relatively robust to luminance and chrominance changes.
In this work we use an additive watermarking model, commonly used in spread spectrum, with a new spreading function. The spreading function is a 2D circular chirp that can simultaneously resist JPEG compression and image rotation. The circular chirp is derived from a block chirp by polar mapping.
The resistance to compression is achieved through the available tuning parameters of a block chirp: the chirp's initial frequency and chirp rate. These two parameters can be used to perform spectral shaping to avoid JPEG compression effects. Rotational invariance is achieved by mapping the block chirp to a ring whose inner and outer diameters are selectable.
The watermark is added in the spatial domain, but detection is performed in the polar domain, where rotation translates to translation. Using electronic watermarks for copyright protection of still images requires robustness against geometrical attacks. In this paper we propose a watermarking scheme that is robust to rotation and scaling distortions. Watermark detection is performed on a 1-D invariant signature, whereas the embedding process adds a watermark signal in the DFT domain. This embedding procedure allows the watermarking signal to be shaped in the frequency domain.
This shaping is determined by solving a game opposing the watermarker and the attacker. Statistically significant ROC-curve test results under several attacks are presented. New results on robustness of secure steganography. Author(s): Mark T. Silvestre. Steganographic embedding is generally guided by two performance constraints at the encoder. Firstly, as is typical in the field of watermarking, all transmission codewords must conform to an average power constraint. Secondly, for the embedding to be statistically undetectable (secure), it is required that the density of the watermarked signal be equal to the density of the host signal.
Recent work has shown that some common watermarking algorithms can be modified such that both constraints are met. In particular, spread spectrum (SS) communication can be secured by a specific scaling of the host before embedding. Also, a side-informed scheme called stochastic quantization index modulation (SQIM) maintains security through the use of an additive stochastic element during embedding. In this work the performance of both techniques is analysed under the AWGN channel assumption.
It will be seen that the robustness of both schemes is lessened by the steganographic constraints, when compared to the standard algorithms on which they are based. Specifically, the probability of decoding error in the SS technique increases when security is required, and the achievable rate of SQIM is shown to be lower than that of dither modulation (on which the scheme is based) for a finite alphabet size. Sphere-hardening dither modulation. Author(s): F. Balado; N. Hurley; G. Spread-Transform Dither Modulation (STDM) is a side-informed data hiding method based on the quantization of a linear projection of the host signal.
This projection affords a signal-to-noise ratio gain which is exploited by Dither Modulation (DM) in the projected domain. Similarly, it is possible to exploit, to the same end, the signal-to-noise ratio gain afforded by the so-called sphere-hardening effect on the norm of a vector. In this paper we describe the Sphere-hardening Dither Modulation (SHDM) data hiding method, which is based on the application of DM to the magnitude of a host signal vector, and we give an analysis of its characteristics.
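For orientation, here is a minimal sketch of DM and its spread-transform variant STDM, the baseline SHDM is compared against (scalar uniform quantizers, illustrative parameters; SHDM itself would quantize the vector norm rather than a linear projection):

```python
import random

random.seed(5)

DELTA = 1.0

def dm_quantize(x, bit):
    # Dither Modulation: one lattice per bit value, offset by DELTA / 2.
    offset = bit * DELTA / 2
    return DELTA * round((x - offset) / DELTA) + offset

def dm_decode(x):
    # Decide which of the two shifted lattices x lies closest to.
    return min((0, 1), key=lambda b: abs(x - dm_quantize(x, b)))

def stdm_embed(host, bit, direction):
    # Spread-Transform DM: quantize only the projection of the host onto a
    # secret direction, spreading the change over the whole vector.
    proj = sum(h * d for h, d in zip(host, direction))
    delta_proj = dm_quantize(proj, bit) - proj
    return [h + delta_proj * d for h, d in zip(host, direction)]

def stdm_decode(signal, direction):
    proj = sum(x * d for x, d in zip(signal, direction))
    return dm_decode(proj)

n = 16
norm = n ** 0.5
direction = [random.choice((-1.0, 1.0)) / norm for _ in range(n)]  # unit norm
host = [random.gauss(0, 4) for _ in range(n)]
marked = stdm_embed(host, 1, direction)
```

Decoding survives small perturbations of the marked vector because the projection must move by more than DELTA / 4 before the nearest shifted lattice changes.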
It is shown that, in the same sense as STDM can be deemed the side-informed counterpart of additive spread spectrum (SS) with repetition coding, SHDM is the side-informed counterpart of multiplicative SS with repetition. Indeed, we demonstrate that SHDM performs similarly to STDM under additive independent distortions, but with the particularity that this is achieved through different quantization regions. The issue of securing SHDM is also studied. In this paper, the security of lattice-quantization data hiding is considered from a cryptanalytic point of view.
The theoretical analysis shows that the observation of several watermarked signals can provide sufficient information for an attacker seeking to estimate the dither signal, quantifying the information leakage in different scenarios. The practical algorithms proposed in this paper show that such information leakage may be successfully exploited with manageable complexity, providing accurate estimates of the dither using a small number of observations.
The aim of this work is to highlight the security weaknesses of lattice data hiding schemes whose security relies only on secret dithering. Performance analysis of nonuniform quantization-based data hiding. Author(s): J. Voloshynovskiy; O. Koval; T. Pun. In this paper, we tackle the problem of performance improvement of quantization-based data hiding in the middle watermark-to-noise ratio (WNR) regime.
The objective is to define a quantization-based framework that maximizes the performance of known-host-state data hiding in the middle-WNR regime, taking into account the host probability density function (pdf). The experimental results show that the use of uniform deadzone quantization (UDQ) achieves higher performance than uniform quantization (UQ) or spread spectrum (SS) based data hiding. The performance enhancement is demonstrated for both achievable-rate and error-probability criteria. We present a new approach to the detection of forgeries in digital images under the assumption that either the camera that took the image is available or other images taken by that camera are available.
Our method is based on detecting the presence of the camera pattern noise, which is a unique stochastic characteristic of imaging sensors, in individual regions in the image. The forged region is determined as the one that lacks the pattern noise. The presence of the noise is established using correlation as in detection of spread spectrum watermarks.
We propose two approaches. In the first, the user selects an area for integrity verification. The second method attempts to determine the forged area automatically, without assuming any a priori knowledge. The methods are tested both on examples of real forgeries and on non-forged images.
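A toy numerical sketch of the correlation test on synthetic data (illustrative parameters; real PRNU estimation uses denoising filters rather than flat-field averaging alone): the pattern noise estimate correlates with the residual of a genuine region but not with that of a forged one.

```python
import random

random.seed(6)

W = 1024
# Fixed per-camera sensor pattern noise (PRNU-like), unknown to the forger.
pattern = [random.gauss(0, 1) for _ in range(W)]

def shoot(scene):
    # Each image from this "camera" carries the same pattern noise plus
    # random shot noise.
    return [s + 0.1 * p + random.gauss(0, 0.2) for s, p in zip(scene, pattern)]

# Estimate the pattern by averaging residuals of many flat-field images.
flat = [50.0] * W
K = 64
estimate = [0.0] * W
for _ in range(K):
    img = shoot(flat)
    estimate = [e + (x - 50.0) for e, x in zip(estimate, img)]
estimate = [e / K for e in estimate]

# A forged image: second half replaced by content from elsewhere
# (carrying no pattern noise).
forged = shoot(flat)
forged[W // 2:] = [50.0 + random.gauss(0, 0.25) for _ in range(W // 2)]

def block_correlation(img, start, size):
    # Normalized correlation between the block residual and the estimate.
    res = [img[i] - 50.0 for i in range(start, start + size)]
    est = estimate[start:start + size]
    num = sum(r * e for r, e in zip(res, est))
    den = (sum(r * r for r in res) * sum(e * e for e in est)) ** 0.5
    return num / den

genuine_corr = block_correlation(forged, 0, W // 2)
forged_corr = block_correlation(forged, W // 2, W // 2)
```

The forged region is then flagged as the one whose correlation with the estimated pattern noise falls below a threshold.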
We also investigate how further image processing applied to the forged image, such as lossy compression or filtering, influences our ability to verify image integrity. Digital elevation maps (DEMs) provide a digital representation of 3-D terrain information. In civilian applications, high-precision DEMs carry a high commercial value owing to the large amount of effort required to acquire them; in military applications, DEMs are often used to represent critical geospatial information in sensitive operations.
These uses call for new technologies to prevent unauthorized distribution and to trace traitors in the event of information leaks related to DEMs. In this paper, we propose a new digital fingerprinting technique to protect DEM data from illegal re-distribution. The proposed method enables reliable detection of fingerprints from both the 3-D DEM data set and its 2-D rendering, whichever format is available to a detector.
Our method starts with extracting from a DEM a set of critical contours either corresponding to important topographic features of the terrain or having application-dependent importance. Fingerprints are then embedded into these critical contours by employing parametric curve modeling and spread spectrum embedding. Finally, a fingerprinted DEM is constructed to incorporate the marked 2-D contours. Through experimental results, we demonstrate the robustness of the proposed method against a number of challenging attacks applied to either DEMs or their contour representations.
Information embedding and extraction for electrophotographic printing processes. Author(s): Aravind K. Chiu; Jan P. Allebach; Edward J. In today's digital world, securing different forms of content is very important for protecting copyright and verifying authenticity.
One example is the watermarking of digital audio and images. We believe that a marking scheme analogous to digital watermarking, but for documents, is very important. In this paper we describe the use of laser amplitude modulation in electrophotographic printers to embed information in a text document. In particular, we describe an embedding and detection process which allows the embedding of 1 bit in a single line of text. For a typical 12-point document, 33 bits can be embedded per page.
Users are able to submit any image from a local or an online source to the system and get classification results with confidence scores. Our system implements three different algorithms from the state of the art based on geometry, wavelet, and cartoon features. We describe the important algorithmic issues involved in achieving satisfactory performance in both speed and accuracy, as well as the capability to handle diverse types of input images. We studied the effects of image size reduction on classification accuracy and speed, and found that different size reduction methods worked best for different classification methods.
In addition, we incorporated machine learning techniques, such as fusion and subclass-based bagging, to counter the performance degradation caused by image size reduction. Text data-hiding for digital and printed documents: theoretical and practical considerations Author(s): R. Koval; J. Vila; E. Topak; F. Deguillaume; Y. Rytsar; T. In this paper, we propose a new theoretical framework for the data-hiding problem of digital and printed text documents. We explain how this problem can be seen as an instance of the well-known Gel'fand-Pinsker problem.
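As background for the Gel'fand-Pinsker framing above, the classical capacity formula for a channel whose state S (here, the quantifiable character features) is known non-causally at the encoder is

```latex
C = \max_{P(U,X \mid S)} \bigl[\, I(U;Y) - I(U;S) \,\bigr]
```

where U is an auxiliary random variable, X the channel input, and Y the output. How the paper maps text features onto S is the authors' contribution and is not reproduced here.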
The main idea behind this interpretation is to consider a text character as a data structure consisting of multiple quantifiable features such as shape, position, orientation, size, and color. We also introduce color quantization, a new semi-fragile text data-hiding method that is fully automatable, has a high information embedding rate, and can be applied to both digital and printed text documents. The main idea of this method is to quantize the color or luminance intensity of each character such that the human visual system cannot distinguish between the original and quantized characters, while a specialized reader machine easily can.
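A minimal sketch of the character-luminance idea, using a QIM-style quantizer as a stand-in (an assumption; the paper's exact quantizer and step size are not specified here). `DELTA` is a hypothetical step assumed small enough to be imperceptible:

```python
DELTA = 4  # hypothetical quantization step, assumed below visual threshold

def embed_char_luminance(lum, bit):
    """QIM-style sketch: snap a character's luminance to the nearest
    even (bit=0) or odd (bit=1) multiple of DELTA.
    Clipping at the 0..255 range bounds is ignored in this sketch."""
    q = round(lum / DELTA)
    if q % 2 != bit:
        q += 1 if (lum / DELTA) - q >= 0 else -1
    return q * DELTA

def extract_char_bit(lum):
    """The reader machine only needs the parity of the quantizer index."""
    return round(lum / DELTA) % 2
```

The maximum luminance change is 1.5 * DELTA, which is what makes the method semi-fragile rather than robust.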
We also describe halftone quantization, a related method that applies mainly to printed text documents. Since these methods may not be completely robust to printing and scanning, an outer coding layer is proposed to address this issue. Finally, we describe a practical implementation of the color quantization method and present experimental results for comparison with other existing methods. E-capacity analysis of data-hiding channels with geometrical attacks Author(s): E. Topak; S. Koval; M. Haroutunian; J.
In a data-hiding communications scenario, geometrical attacks lead to a loss of reliable communication due to synchronization problems when the applied attack is unknown. In our previous work, an information-theoretic analysis of this problem was performed for asymptotic setups. Assuming that the applied geometrical attack belongs to a set of finite cardinality, it was demonstrated that the attack does not asymptotically affect the achievable rate in comparison to the scenario without any attack.
The main goal of this paper is to investigate the upper and lower bounds on the rate reliability function that can be achieved in the data hiding channel with some geometrical state. In particular, we investigate the random coding and sphere packing bounds in channels with random parameter for the case when the interference channel state is not taken into account at the encoder.
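For orientation, the random coding bound being investigated has, in the classical memoryless-channel setting, the Gallager form (stated here only as background; the paper's versions additionally account for the random channel parameter):

```latex
E_r(R) = \max_{0 \le \rho \le 1} \max_{Q} \bigl[ E_0(\rho, Q) - \rho R \bigr],
\qquad
E_0(\rho, Q) = -\log \sum_{y} \Bigl( \sum_{x} Q(x)\, W(y \mid x)^{1/(1+\rho)} \Bigr)^{1+\rho}
```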
Furthermore, only those geometrical transformations that preserve the input dimensionality and input type class are considered. For this case we show that the conclusion obtained in the asymptotic case remains valid: within the considered class of geometrical attacks, the rate reliability function is bounded in the same way as in the case with no geometrical distortions.
We present an image data-hiding scheme based on near-capacity dirty-paper codes. The scheme achieves high embedding rates by "hiding" information in the mid-frequency DCT coefficients of each DCT block of the host image. To reduce the perceptual distortion due to data hiding, the mid-frequency DCT coefficients are first perceptually scaled according to Watson's model. Robustness tests against different attacks, such as low-pass filtering, image scaling, and lossy compression, show that our scheme is a good candidate for high-rate image data-hiding applications.
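A hedged sketch of mid-band DCT embedding for one 8x8 block. The paper's dirty-paper code and Watson-model scaling are replaced here by a plain additive spread-spectrum step with a constant `alpha`, so this only illustrates where the bits go, not the near-capacity coding:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix, so T @ block @ T.T is the 2-D DCT."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

# mid-frequency positions of an 8x8 block (band chosen for illustration)
MID = [(i, j) for i in range(8) for j in range(8) if 3 <= i + j <= 6]

def embed_block(block, bit, alpha=2.0, key=0):
    """Spread one bit over the mid-band coefficients of one 8x8 block.
    alpha is a constant stand-in for a perceptual (Watson-style) scale."""
    T = dct_matrix()
    coef = T @ block @ T.T
    chips = np.random.default_rng(key).choice([-1.0, 1.0], size=len(MID))
    b = 1.0 if bit else -1.0
    for (i, j), c in zip(MID, chips):
        coef[i, j] += alpha * b * c
    return T.T @ coef @ T  # inverse 2-D DCT

def detect_block(marked, original, key=0):
    """Non-blind detection: correlate the coefficient residual with the chips."""
    T = dct_matrix()
    diff = T @ (marked - original) @ T.T
    chips = np.random.default_rng(key).choice([-1.0, 1.0], size=len(MID))
    corr = sum(diff[i, j] * c for (i, j), c in zip(MID, chips))
    return corr > 0
```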
Constructing steganographic schemes in which the sender and the receiver do not share knowledge about the location of embedding changes requires wet paper codes. Steganography with non-shared selection channels empowers the sender, who can now embed secret data using arbitrary side information, including a high-resolution version of the cover object (perturbed quantization steganography), local properties of the cover (adaptive steganography), and even pure randomness.
In this paper, we propose a new approach to wet paper codes using random linear codes of small codimension that at the same time improves the embedding efficiency, i.e., the number of message bits embedded per embedding change. We describe a practical algorithm, test its performance experimentally, and compare the results to theoretically achievable bounds.
We point out an interesting ripple phenomenon that should be taken into account by practitioners. The proposed coding method can be modularly combined with most steganographic schemes to allow them to use non-shared selection channels and, at the same time, improve their security by decreasing the number of embedding changes.
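The wet paper idea can be sketched with a keyed random binary matrix D: the sender flips only "dry" cover bits so that D·stego equals the message mod 2, while the receiver, knowing only the key, recomputes D·stego without ever learning which positions were dry. The matrix size, key handling, and plain Gaussian solver below are illustrative assumptions (the paper uses structured random linear codes of small codimension for efficiency):

```python
import numpy as np

def solve_gf2(A, b):
    """Gaussian elimination over GF(2); returns one solution or None."""
    A, b = A.copy() % 2, b.copy() % 2
    rows, cols = A.shape
    x = np.zeros(cols, dtype=np.uint8)
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        b[[r, piv]] = b[[piv, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
                b[i] ^= b[r]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    if any(b[r:]):          # inconsistent system: message not embeddable
        return None
    for row, c in enumerate(pivots):
        x[c] = b[row]       # free variables stay 0 (fewest changes not optimized)
    return x

def wet_paper_embed(cover_bits, dry_mask, message, key=0):
    """Flip only dry cover bits so that (D @ stego) % 2 == message."""
    rng = np.random.default_rng(key)
    D = rng.integers(0, 2, size=(len(message), len(cover_bits)), dtype=np.uint8)
    dry = np.flatnonzero(dry_mask)
    rhs = (message ^ ((D @ cover_bits) % 2)).astype(np.uint8)
    e_dry = solve_gf2(D[:, dry], rhs)
    if e_dry is None:
        raise ValueError("message not embeddable with these dry positions")
    stego = cover_bits.copy()
    stego[dry] ^= e_dry
    return stego

def wet_paper_extract(stego_bits, k, key=0):
    """Receiver regenerates D from the key; dry positions are never needed."""
    rng = np.random.default_rng(key)
    D = rng.integers(0, 2, size=(k, len(stego_bits)), dtype=np.uint8)
    return (D @ stego_bits) % 2
```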
Successful watermarking algorithms have already been developed for various applications ranging from meta-data tagging to forensic tracking. Nevertheless, it is worthwhile to develop alternative watermarking techniques that provide a broader basis for meeting emerging services, usage models, and security threats. To this end, we propose a new multiplicative watermarking technique for video, based on the principles of our successful MASK audio watermark. Audio-MASK embeds the watermark by modulating the short-time envelope of the audio signal and performs detection using a simple envelope detector followed by a SPOMF (symmetrical phase-only matched filter).
Video-MASK takes a similar approach and modulates the image luminance envelope. In addition, it incorporates a simple model to account for the luminance sensitivity of the HVS (human visual system). Preliminary tests show the algorithm's transparency and robustness to lossy compression. Selective encryption for H.
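The envelope-modulation-plus-SPOMF pipeline can be sketched in one dimension as follows. The carrier, the modulation depth, and envelope extraction by plain magnitude are assumptions, not the MASK specification; the detector returns the best alignment and a normalized peak, which is what makes SPOMF robust to amplitude scaling:

```python
import numpy as np

def embed_mask(signal, pattern, alpha=0.1):
    """Envelope modulation sketch: scale the signal by (1 + alpha * w),
    where w is a slow pseudorandom +/-1 pattern (MASK-like, hypothetical)."""
    return signal * (1.0 + alpha * pattern)

def spomf_detect(signal, pattern):
    """Symmetrical phase-only matched filter: correlate the extracted
    envelope with the pattern using unit-magnitude cross-spectra only."""
    env = np.abs(signal)          # crude envelope extraction
    env -= env.mean()
    cross = np.fft.fft(env) * np.conj(np.fft.fft(pattern))
    cross /= np.abs(cross) + 1e-12  # keep phase, discard magnitude
    corr = np.real(np.fft.ifft(cross))
    return int(np.argmax(corr)), float(corr.max())
```

Because only spectral phase is kept, the correlation peak approaches 1 at the correct circular shift regardless of the signal's amplitude.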
Due to the ease with which digital data can be manipulated, and due to ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution, as well as plagiarism, of digital audio, images, and video are still ongoing.