Prosecution Insights
Last updated: April 19, 2026
Application No. 18/091,940

METHOD AND APPARATUS FOR CONVERTING IMAGE USING QUANTUM CIRCUIT

Status: Final Rejection (§103)
Filed: Dec 30, 2022
Examiner: LEE, MICHAEL CHRISTOPHER
Art Unit: 2128
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Samsung Electronics
OA Round: 2 (Final)

Grant Probability: 59% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 2m
Grant Probability With Interview: 86%
Examiner Intelligence

- Career allow rate: 59% of resolved cases (80 granted / 136 resolved; +3.8% vs TC avg)
- Interview lift: strong, +27.1% for resolved cases with an interview vs. without
- Typical timeline: 3y 2m average prosecution; 54 applications currently pending
- Career history: 190 total applications across all art units
Statute-Specific Performance

- §101: 29.1% (-10.9% vs TC avg)
- §103: 45.0% (+5.0% vs TC avg)
- §102: 11.5% (-28.5% vs TC avg)
- §112: 12.3% (-27.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 136 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant's Amendment and remarks dated 1/9/2026 have been considered. Claims 4 and 13 are cancelled and claim 19 is newly added. Claims 1-3, 5-12, and 14-19 are pending.

Drawing Objections. The objection to Fig. 6 is withdrawn in view of the amendments to para. 0073 of the specification to identify reference characters 607 and 608. The objection to Fig. 7 is withdrawn in view of the substitute Fig. 7 provided by Applicant.

Abstract Objections. The objection to the abstract is withdrawn in view of the corrected Abstract provided by Applicant, which corrects the misspelling of "Haar."

Specification Objections. The objections to paras. 0004, 0006, 0015, 0048, and 00107 are withdrawn in view of the corrections made to those paragraphs.

Claim Objections. The objections to claims 1, 10, and 16 are withdrawn in view of Applicant's amendments to those claims.

35 U.S.C. 112(f) Interpretation. The interpretation of certain limitations in claims 10-18 as means-plus-function under 35 U.S.C. 112(f) is withdrawn in view of Applicant's amendments to independent claim 10.

Response to Arguments

On page 15 of Applicant's 1/9/2026 Amendment and remarks, Applicant asserts that at least Fig. 4 and para. 0055 of the instant specification provide sufficient written description support for the amendments to independent claims 1 and 10. The examiner agrees that the portions of the disclosure identified by Applicant, together with at least para. 0022 of the instant specification, provide sufficient written description support for the amendments to independent claims 1 and 10.

On page 18 of Applicant's 1/9/2026 Amendment and remarks, with respect to the rejection of claim 1 under 35 U.S.C.
103, Applicant argues with respect to the SU reference:

[Image: media_image1.png (excerpt of Applicant's argument)]

Applicant's argument and amendments are persuasive. The previous rejection of claim 1 under 35 U.S.C. 103 is withdrawn. However, Applicant's amendments to claim 1 necessitate a new ground of rejection under 35 U.S.C. 103 using at least the SZADY reference, as explained in the detailed rejections below.

On page 19 of Applicant's 1/9/2026 Amendment and remarks, with respect to the rejection of claim 1 under 35 U.S.C. 103, Applicant argues that the SU reference does not teach the "the 1-level output quantum state being a state in which a 1-level sub-image quantum state, corresponding to each of a plurality of 1-level sub-images generated by applying Harr wavelet transformation to the original image, and a quantum state, corresponding to a label for the 1-level sub-image quantum state, are entangled with each other" limitation. Specifically, Applicant argues that SU does not teach the "entangled with each other" limitation. The examiner respectfully disagrees. Applicant's response ignores the citation to p. 214523, section II.F of SU, which states that the "NEQR uses two entangled quantum sequences to store the grayscale information and position information of the image pixels." Therefore, SU teaches entangling of at least two quantum sequences, and therefore the entangling of two specific quantum states.

On page 19 of Applicant's 1/9/2026 Amendment and remarks, with respect to the rejection of claim 1 under 35 U.S.C. 103, Applicant argues that the SU reference does not teach the "a state of last two Y-qubits among the plurality of Y-qubits" limitation. The examiner agrees that SU and the previous prior art of record do not teach this limitation.
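The NEQR passage the examiner relies on can be illustrated with a toy statevector: a grayscale register and a position register prepared jointly are correlated, so neither factors out as a product state. The sketch below is illustrative only; the 2x2 image, the register widths, and the pixel values are assumptions, not taken from SU or the record.

```python
import numpy as np

# Toy NEQR-style state for a 2x2 image: 2 position qubits, 2 gray qubits.
# |psi> = (1/2) * sum_pos |gray(pos)>|pos> -- the gray and position
# registers are correlated (entangled): no product |g>|p> equals this sum.
gray = {0: 0b00, 1: 0b01, 2: 0b10, 3: 0b11}  # assumed pixel value per position

n_pos = 2
dim = 2 ** (n_pos + 2)
psi = np.zeros(dim)
for pos, g in gray.items():
    psi[(g << n_pos) | pos] = 0.5  # amplitude 1/sqrt(4) per pixel

# Entanglement check: the reduced state of the gray register is mixed
# (purity < 1), which cannot happen for a product state.
rho = np.outer(psi, psi).reshape(4, 4, 4, 4)  # axes: (gray, pos, gray', pos')
rho_gray = np.trace(rho, axis1=1, axis2=3)    # partial trace over position
purity = np.trace(rho_gray @ rho_gray).real
print(purity)  # 0.25 < 1, so the two registers are entangled
```

Because each position here maps to a distinct gray value, the reduced gray state is maximally mixed, the strongest form of the correlation the examiner points to.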
However, new grounds of rejection are provided herein using at least the ZHOU and MAZZOLA references, where such new grounds of rejection are necessitated by Applicant's amendments to independent claim 1.

On page 20 of Applicant's 1/9/2026 Amendment and remarks, Applicant asserts that independent claim 10 and all claims depending from claims 1 and 10 should be allowed for the same reasons argued with respect to claim 1. The examiner respectfully disagrees for the reasons explained above with respect to claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 5-6, 10, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Hu, Wen-Wen, et al., "Quantum image watermarking algorithm based on Haar wavelet transform," IEEE Access 7 (2019): pp. 121303-121320, hereinafter referenced as HU, in view of Su, Jie, et al., "A new trend of quantum image representations," IEEE Access 8 (2020): pp.
214520-214537, hereinafter referenced as SU, and further in view of Yao, Xi-Wei, et al., "Quantum image processing and its application to edge detection: Theory and experiment," Physical Review X 7.3 (2017): 031041, hereinafter referenced as YAO, and further in view of US 20220103178 A1, hereinafter referenced as SZADY, and further in view of Sun, Bo, et al., "An RGB multi-channel representation for images on quantum computers," J. Adv. Comput. Intell. Intell. Inform. 17.3 (2013), hereinafter referenced as SUN, and further in view of Zhou, Ri-Gui, and Ya-Juan Sun, "Quantum multidimensional color images similarity comparison," Quantum Information Processing 14.5 (2015): 1605-1624, hereinafter referenced as ZHOU, and further in view of US 20220188679 A1, hereinafter referenced as MAZZOLA.

Regarding Claim 1

HU teaches:

A method for converting an image using a quantum circuit, the method comprising: (HU, p. 121304, section I.A: "The scheme proposed in this paper falls into the category of quantum image watermarking methods."; HU, p. 121304, section I.B: "In this work, the quantum Haar wavelet transforms (QHWT) is developed along with its quantum circuit implementation. Based on the introduced QHWT, a quantum image watermarking algorithm in the frequency domain is proposed for FRQI."; Examiner's Note: HU teaches quantum image watermarking, i.e., conversion of an image to a watermarked image, using a quantum circuit implementation)

generating an input quantum state corresponding to an original image, based on a pixel value of each pixel in the original image; (HU, pp. 121305-06, section III.B: "Flexible representation of quantum images (FRQI) proposed in [12] encodes the information of a 2^n x 2^n digital image in the following quantum state: ... The FRQI representation model integrates the information in an image within two variables: |i> = |Y>|X> denotes the pixel location information, ... represent the coordinate information in the horizontal and vertical directions, respectively; ... encodes the pixel color information in position |i>. Therefore, it is easy to deduce that (1 + 2n) qubits are needed to encode a 2^n x 2^n digital image in a quantum register based on the FRQI model"; Examiner's Note: HU teaches that using the FRQI approach, an image is encoded in an original quantum state (corresponding to the recited "input quantum state"))

transforming the input quantum state into a 1-level intermediate quantum state ...; (HU, p. 121308, section IV.B: "For a two-dimensional digital image, there are two ways to transform an image based on HWT: standard decomposition and non-standard decomposition [40]. The standard decomposition method refers to transforming the image's pixel values in each row using a one-dimensional HWT. This is followed by transforming the columns of the row-transformed image by using a one-dimensional HWT again."; HU, p. 121312, section V.B: "Step 1: Implement block 1st-level decomposition based on QHWT for the carrier image |C> to obtain the middle quantum state |CB1>"; Examiner's Note: HU discloses decomposing the carrier image to a middle quantum state (corresponding to the recited "intermediate quantum state"), where such middle quantum state is obtained using a 1st-level quantum Haar wavelet transform, so the middle quantum state corresponds to the recited "1-level")

transforming the 1-level intermediate quantum state into a 1-level output quantum state, ... (HU, p. 121312, section V.B: "Step 3: The inverse block 1st-level transform of the quantum state |CB>" is used to obtain the watermarked image |CWB>; Examiner's Note: |CWB> corresponds to the recited "output quantum state", which corresponds to a 1st-level transform (corresponding to the recited "1-level"))

a plurality of 1-level sub-images generated by applying Harr wavelet transformation to the original image (HU, p. 121304, section I.B: "In this work, the quantum Haar wavelet transforms (QHWT) is developed along with its quantum circuit implementation. Based on the introduced QHWT, a quantum image watermarking algorithm in the frequency domain is proposed for FRQI. The QHWT can be applied to decompose a quantum image modeled by FRQI. Using the multi-resolution analysis, the quantum carrier image can be decomposed into any desired level resulting in four subbands: approximate subband, horizontal subband, vertical subband, and diagonal subband."; Examiner's Note: HU teaches using a quantum Haar wavelet transform to decompose an image into 4 subbands (or 4 sub-images): approximate, horizontal, vertical, and diagonal, where as explained above, a transform can be a 1st-level transform (corresponding to the recited "1-level" limitation); the examiner further notes that paras. 0051-0052 of the instant specification also disclose sub-images of a vertical direction, a horizontal direction, and a diagonal direction)

However, HU fails to explicitly teach:

by selectively applying a Y-axis rotation gate to two qubits among a plurality of qubits representing the input quantum state and not applying the Y-axis rotation gate to remaining qubits among the plurality of qubits

by applying a swap gate to a plurality of qubits representing the 1-level intermediate quantum state

the 1-level output quantum state being a state in which a 1-level sub-image quantum state, corresponding to each of ..., corresponding to a label for the 1-level sub-image quantum state, are entangled with each other,

wherein the plurality of qubits representing the input quantum state include a plurality of X-qubits corresponding to X-axis coordinates of each pixel, and a plurality of Y-qubits corresponding to Y-axis coordinates of each pixel, and

wherein the quantum state, corresponding to the label for the 1-level sub-image quantum state, is a state of last two Y-qubits among the plurality of Y-qubits.
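HU's standard decomposition (a 1-D Haar transform over every row, then over every column) is what produces the four subbands the rejection maps to the claimed 1-level sub-images. A classical sketch, assuming an even-sided grayscale array; which quadrant carries horizontal vs. vertical detail varies by convention, so the labels below are one common convention, not taken from HU:

```python
import numpy as np

def haar_1d(v):
    """One level of the 1-D Haar transform: scaled (averages, differences)."""
    avg = (v[0::2] + v[1::2]) / np.sqrt(2)
    dif = (v[0::2] - v[1::2]) / np.sqrt(2)
    return np.concatenate([avg, dif])

def haar_2d_level1(img):
    """Standard decomposition: transform each row, then each column."""
    rows = np.apply_along_axis(haar_1d, 1, img)
    both = np.apply_along_axis(haar_1d, 0, rows)
    n = img.shape[0] // 2
    return {
        "LL": both[:n, :n],  # approximate (low-frequency) subband
        "LH": both[:n, n:],  # one detail subband (horizontal, by convention)
        "HL": both[n:, :n],  # one detail subband (vertical, by convention)
        "HH": both[n:, n:],  # diagonal detail subband
    }

img = np.arange(16, dtype=float).reshape(4, 4)
sub = haar_2d_level1(img)
# The 1/sqrt(2)-scaled Haar transform is orthogonal, so energy is preserved:
energy = sum(np.sum(s ** 2) for s in sub.values())
assert np.isclose(energy, np.sum(img ** 2))
```

Each subband is a quarter-size sub-image, matching the four-subband picture (approximate, horizontal, vertical, diagonal) the examiner cites from HU.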
However, in a related field of endeavor (quantum image representation), SU teaches:

transforming the input quantum state into a 1-level intermediate quantum state by ... applying a Y-axis rotation gate to two qubits among a plurality of qubits representing the input quantum state (SU, p. 214524, section II.H: "The Polynomial Preparation Theorem (PPT) shows that the MCQI can be constructed using the Hadamard and control rotation gates."; SU, p. 214531, section II.U: "Then, we perform the next quantum transformation to assign the color values and coordinates to these sorted positions by the superposed 2^{2n} quantum states. Ry(2θ) and Ry(2φ) are rotation matrices (the rotations about Y axis by the angle 2θ and 2φ, respectively)"; Examiner's Note: SU teaches rotation matrices to perform a quantum transformation for quantum states around a y-axis; the HU-SU combination now implements the y-axis rotation of SU, using the rotation gates of both HU (see HU, p. 121315, section III.E) and SU, applied to at least two of the qubits representing pixels with respect to the initial quantum state as in HU)

the 1-level output quantum state being a state in which a 1-level sub-image quantum state, corresponding to each of a plurality of 1-level sub-images generated by applying Harr wavelet transformation to the original image, and a quantum state, corresponding to a label for the 1-level sub-image quantum state, are entangled with each other. (SU, p. 214521, section II.A, Figure 1 [Image: media_image2.png]; SU, p. 214523, section II.F: "The NEQR uses two entangled quantum sequences to store the grayscale information and position information of the image pixels"; Examiner's Note: SU teaches, with respect to the FRQI representation (which is the same representation as HU), that sub-images have labels (00, 01, 10, 11), and that the complete image is a sum of the sub-images as shown in the corresponding equation in Figure 1, and, in a separate embodiment, also teaches a quantum image representation using entangled quantum sequences; the HU-SU combination now modifies the quantum image representation of HU to utilize sub-images with labels as in SU, in a manner that the representation entangles the quantum states associated with a first sub-image (such as the sub-image having label 00) and a quantum state associated with a label (such as the quantum state having label 11))

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU as explained herein. As disclosed by SU, one of ordinary skill would have been motivated to do so because SU teaches that the quantum representation for an image can "improve processing efficiency." (p. 214521, section II). One of ordinary skill would further be motivated to use the inherent properties of quantum entanglement because SU teaches that such quantum properties "makes quantum computing superior to classical computing in terms of information storage and parallel computation." (p. 214521, section II).
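The Ry rotation SU supplies, and the selective application the HU-SU combination relies on, can be sketched on a toy register: the gate is applied only to chosen qubits, with identity on the rest. The register size, target indices, and angle below are illustrative assumptions, not taken from the cited references:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Bloch-sphere Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_selective(state, n_qubits, gate, targets):
    """Apply `gate` to each qubit in `targets`; identity elsewhere."""
    op = np.eye(1)
    for q in range(n_qubits):
        op = np.kron(op, gate if q in targets else np.eye(2))
    return op @ state

n = 4
state = np.zeros(2 ** n)
state[0] = 1.0  # |0000>
# Rotate only qubits 0 and 1 by -pi/2; qubits 2 and 3 are left untouched.
out = apply_selective(state, n, ry(-np.pi / 2), {0, 1})
assert np.isclose(np.linalg.norm(out), 1.0)  # the operation is unitary
```

Since Ry(-π/2)|0⟩ has amplitude cos(π/4) on |0⟩, the |0000⟩ amplitude of the output is cos²(π/4) = 0.5, while the untargeted qubits stay in |0⟩.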
However, HU and SU fail to explicitly teach:

by selectively

by applying a swap gate to a plurality of qubits representing the 1-level intermediate quantum state

wherein the plurality of qubits representing the input quantum state include a plurality of X-qubits corresponding to X-axis coordinates of each pixel, and a plurality of Y-qubits corresponding to Y-axis coordinates of each pixel, and

wherein the quantum state, corresponding to the label for the 1-level sub-image quantum state, is a state of last two Y-qubits among the plurality of Y-qubits.

However, in a related field of endeavor (quantum image processing), YAO teaches:

transforming the 1-level intermediate quantum state into a 1-level output quantum state by applying a swap gate to a plurality of qubits representing the 1-level intermediate quantum state (YAO, p. 9, Appendix B: "Specifically, S4 is the SWAP gate to interchange the states of the two qubits. ... implemented by (m − k − 1) SWAP gates"; Examiner's Note: YAO teaches using swap gates to perform a quantum wavelet transform, such as a quantum Haar wavelet transform; the HU-SU-YAO combination now modifies the quantum Haar wavelet transform of HU to utilize swap gates to effectuate such transform as disclosed by YAO)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU and YAO as explained above. As disclosed by YAO, one of ordinary skill would have been motivated to do so because YAO teaches that the Haar wavelet transform is "basic" and "commonly used", and therefore one of ordinary skill would be motivated to use such a basic and commonly used transform, utilizing the swap gate techniques of YAO, in order to perform quantum image processing. (p. 3, section II.B).
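The SWAP gate YAO supplies simply interchanges the states of two qubits; in the two-qubit computational basis it exchanges the |01⟩ and |10⟩ amplitudes. A minimal check (the example state is an assumption for illustration):

```python
import numpy as np

# Two-qubit SWAP in the computational basis |00>, |01>, |10>, |11>.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# |psi> = a|00> + b|01> + c|10> + d|11>  ->  a|00> + c|01> + b|10> + d|11>
psi = np.array([0.5, 0.5j, -0.5, 0.5j])
out = SWAP @ psi
assert np.allclose(out, [0.5, -0.5, 0.5j, 0.5j])

# SWAP is its own inverse, as expected of a permutation of basis states:
assert np.allclose(SWAP @ SWAP, np.eye(4))
```

Chaining such gates permutes qubit positions across a larger register, which is the role the (m − k − 1) SWAP gates play in YAO's wavelet-transform circuit.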
One of ordinary skill would understand that a "basic" and "commonly used" transform has been peer-reviewed and tested, and therefore one of ordinary skill would understand that such techniques could be implemented with a reasonable expectation of success.

However, HU, SU, and YAO fail to explicitly teach:

by selectively

wherein the plurality of qubits representing the input quantum state include a plurality of X-qubits corresponding to X-axis coordinates of each pixel, and a plurality of Y-qubits corresponding to Y-axis coordinates of each pixel, and

wherein the quantum state, corresponding to the label for the 1-level sub-image quantum state, is a state of last two Y-qubits among the plurality of Y-qubits.

However, in a related field of endeavor (quantum computing techniques and circuits, see para. 0002), SZADY teaches and makes obvious:

by selectively applying a Y-axis rotation gate to two qubits among a plurality of qubits representing the input quantum state and not applying the Y-axis rotation gate to remaining qubits among the plurality of qubits (SZADY, para. 0049: "The Adalus gate 112 includes a controlled-controlled rotation around the Z axis in the Bloch sphere that selectively applies, under control of the 3rd and 4th qubits from the top (comprising a first subset of the 5 qubits), a pi radian Z-axis Bloch sphere rotation to a bottom (target) qubit of the 5 qubits. A pair of controlled Hadamard (CCH) gates selectively conjugate the target qubit under control of a second subset of the 5 qubits that comprises the remaining 2 qubits."; Examiner's Note: the HU-SU-YAO-SZADY combination now modifies the quantum Haar wavelet transform of HU to selectively rotate (as taught by SZADY) qubits using a Y-axis rotation gate (as taught by SU), which means that at least some remaining qubits are not rotated, as in SZADY)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, and SZADY as explained above. One of ordinary skill would have been motivated to do so in order to have more granular control over whether certain gate operations are applied to certain qubits.

However, HU, SU, YAO, and SZADY fail to explicitly teach:

wherein the plurality of qubits representing the input quantum state include a plurality of X-qubits corresponding to X-axis coordinates of each pixel, and a plurality of Y-qubits corresponding to Y-axis coordinates of each pixel, and

wherein the quantum state, corresponding to the label for the 1-level sub-image quantum state, is a state of last two Y-qubits among the plurality of Y-qubits.

However, in a related field of endeavor (representing images on quantum computers, see p. 404, section 1), SUN teaches and makes obvious:

wherein the plurality of qubits representing the input quantum state include: a plurality of X-qubits corresponding to X-axis coordinates of each pixel; and a plurality of Y-qubits corresponding to Y-axis coordinates of each pixel. (SUN, p. 406, section 2.2: "As shown in Fig.
2, the first 3 qubits (c1, c2, and c3) are color qubits that encode RGB color information for an image and the remaining 2n qubits (y_{n−1}, y_{n−2}, ..., y_0 and x_{n−1}, x_{n−2}, ..., x_0) are used to encode position information (Y-Axis and X-Axis) about pixels of a 2^n × 2^n pixels image, as shown in Fig. 2."; Examiner's Note: SUN discloses encoding x-axis and y-axis information for an image using separate qubits; the HU-SU-YAO-SZADY-SUN combination now modifies the FRQI representation of HU (which specifically teaches pixel location information, see p. 121306, section III.B) with the teachings of SUN that have separate qubits for x-axis and y-axis coordinates for each pixel)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, SZADY, and SUN as explained above. As disclosed by SUN, one of ordinary skill would have been motivated to do so because SUN teaches using the property of quantum parallelism to encode color and position information, in a manner that requires fewer qubits to encode pixels than other architectures, such as qubit lattice or grid qubit. (p. 405, section 1).

However, HU, SU, YAO, SZADY, and SUN fail to explicitly teach:

wherein the quantum state, corresponding to the label for the 1-level sub-image quantum state, is a state of last two Y-qubits among the plurality of Y-qubits.

However, in a related field of endeavor (quantum multidimensional color images), ZHOU teaches and makes obvious:

wherein the quantum state, corresponding to the label for the 1-level sub-image quantum state, is a state of (ZHOU, p. 1613, section 4.1.1: "we use 2 qubits to store the sub-partition-images information"; Examiner's Note: the HU-SU-YAO-SZADY-SUN-ZHOU combination now modifies the FRQI representation of HU with the teachings of ZHOU such that 2 qubits (including Y-axis qubits as in SUN) are now used to store the entangled label of SU)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, SZADY, SUN, and ZHOU as explained above. As disclosed by ZHOU, one of ordinary skill would have been motivated to do so because ZHOU teaches the benefit of using labels to link images so that "the similarity value for multidimensional color images can be computed." (p. 1613, section 4).

However, HU, SU, YAO, SZADY, SUN, and ZHOU fail to explicitly teach:

last two Y-qubits among the plurality of Y-qubits.

However, in a related field of endeavor (quantum resources, see para. 0001), MAZZOLA teaches and makes obvious:

wherein the quantum state, corresponding to the label for the 1-level sub-image quantum state, is a state of last two Y-qubits among the plurality of Y-qubits. (MAZZOLA, para. 0076: "Here the final qubit acts as a label ..."; Examiner's Note: the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA combination now modifies the FRQI representation of HU with the teachings of ZHOU such that 2 qubits (including Y-axis qubits as in SUN) are now used to store the entangled label of SU, where the final qubits are used for the label as taught by MAZZOLA)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA as explained above.
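The register layout the combination arrives at (SUN's separate X- and Y-coordinate qubits, with the last two Y-qubits doubling as the ZHOU/MAZZOLA-style sub-image label) can be sketched as a bit-packing scheme. The qubit count, bit order, and label-to-subband mapping here are hypothetical illustrations, not taken from any cited reference:

```python
# Illustrative register layout for a 2^n x 2^n image, n = 4:
# basis index bits are [x3 x2 x1 x0 | y3 y2 y1 y0], where y1 y0
# (the last two Y-qubits) double as the 1-level sub-image label:
# 00 = LL (approximate), 01 = LH, 10 = HL, 11 = HH (assumed mapping).
n = 4
LABELS = {0b00: "LL", 0b01: "LH", 0b10: "HL", 0b11: "HH"}

def basis_index(x, y):
    """Pack (x, y) pixel coordinates into one computational-basis index."""
    return (x << n) | y

def subimage_label(index):
    """Read the label carried by the last two Y-qubits of a basis index."""
    return LABELS[index & 0b11]

idx = basis_index(x=5, y=0b1010)
print(subimage_label(idx))  # "HL" (the last two Y-bits of y=1010 are 10)
```

The point of the scheme is that no extra label register is needed: measuring the designated Y-qubits of a basis state identifies which sub-image it belongs to.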
As disclosed by MAZZOLA, one of ordinary skill would have been motivated to do so because MAZZOLA teaches the ability to distinguish between different quantum states using labels. (para. 0076).

Regarding Claim 3

HU, SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA disclose the method of claim 1 as explained above. HU further teaches:

wherein the plurality of 1-level sub-images includes a low-frequency image, a horizontal direction high-frequency image, a vertical direction high-frequency image, and a diagonal direction high-frequency image for the original image; and (HU, p. 121304, section I.B: "In this work, the quantum Haar wavelet transforms (QHWT) is developed along with its quantum circuit implementation. ... The QHWT can be applied to decompose a quantum image modeled by FRQI. Using the multi-resolution analysis, the quantum carrier image can be decomposed into any desired level resulting in four subbands: approximate subband, horizontal subband, vertical subband, and diagonal subband."; HU, p. 121307, section III.C: "Based on the HWT, a signal can be decomposed into its low frequency and high frequency content"; HU, p. 121310, section V.A: "Note that the approximation subband, having the low frequency content, coarsely describes the image and contains much of the energy of the original image. The higher frequency coefficients in the detailed subbands represent the fine details of the image and their energy is relatively small compared to the approximation subband."; Examiner's Note: HU teaches using a quantum Haar wavelet transform to decompose an image into 4 subbands (or 4 sub-images): approximate, horizontal, vertical, and diagonal, where as explained above, a transform can be a 1st-level transform (corresponding to the recited "1-level" limitation), and where the approximate subband is low-frequency and the horizontal/vertical/diagonal subbands are high-frequency)

However, HU fails to explicitly teach:

the label for the 1-level sub-image quantum state is a label for identifying whether the 1-level sub-image quantum state is a quantum state corresponding to which image among the low-frequency image, the horizontal direction high-frequency image, the vertical direction high-frequency image, and the diagonal direction high-frequency image.

However, in a related field of endeavor (quantum image representation), SU teaches:

the label for the 1-level sub-image quantum state is a label for identifying whether the 1-level sub-image quantum state is a quantum state corresponding to which image among the low-frequency image, the horizontal direction high-frequency image, the vertical direction high-frequency image, and the diagonal direction high-frequency image. (SU, p. 214521, section II.A, Figure 1 [Image: media_image2.png]; Examiner's Note: SU teaches assigning labels (00, 01, 10, 11) to sub-images; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA combination now modifies the approximate, vertical, horizontal, and diagonal subbands of HU so that they are given labels as in SU)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA as explained above.
As disclosed by SU, one of ordinary skill would have been motivated to do so because SU teaches that the quantum representation for an image can "improve processing efficiency." (p. 214521, section II).

Regarding Claim 5

HU, SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA disclose the method of claim 1 as explained above. However, HU, SU, YAO, and SZADY fail to explicitly teach:

wherein, in the transforming of the input quantum state into the 1-level intermediate quantum state, a state of each of one of the plurality of X-qubits and one of the plurality of Y-qubits rotates by -π/2 about a Y-axis of a Bloch sphere.

However, in a related field of endeavor (representing images on quantum computers, see p. 404, section 1), SUN teaches:

wherein, in the transforming of the input quantum state into the 1-level intermediate quantum state, a state of each of one of the plurality of X-qubits and one of the plurality of Y-qubits rotates by -π/2 about a Y-axis of a Bloch sphere. (SUN, p. 406, section 2.2: "As shown in Fig. 2, the first 3 qubits (c1, c2, and c3) are color qubits that encode RGB color information for an image and the remaining 2n qubits (y_{n−1}, y_{n−2}, ..., y_0 and x_{n−1}, x_{n−2}, ..., x_0) are used to encode position information (Y-Axis and X-Axis) about pixels of a 2^n × 2^n pixels image, as shown in Fig. 2."; SUN, p. 407, section 2.3: "Note that, rotation matrices (rotations around the y-axis of a Bloch sphere by angle 2θ)"; Examiner's Note: SUN discloses encoding x-axis and y-axis information for an image using separate qubits, and further teaches rotating around the y-axis of a Bloch sphere; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA combination now modifies the FRQI representation of HU (which specifically teaches pixel location information, see p. 121306, section III.B) with the teachings of SUN that have separate qubits for x-axis and y-axis coordinates for each pixel, and rotates about the axis of a Bloch sphere, where YAO (p. 5, section II.C) discloses "π/2 rotations" and one of ordinary skill would understand that a "-π/2 rotation" is merely a rotation in the opposite direction)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA as explained above. As disclosed by SUN, one of ordinary skill would have been motivated to do so because SUN teaches using the property of quantum parallelism to encode color and position information, in a manner that requires fewer qubits to encode pixels than other architectures, such as qubit lattice or grid qubit. (p. 405, section 1).

Regarding Claim 6

HU, SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA disclose the method of claim 5 as explained above. However, HU, SU, YAO, and SZADY fail to explicitly teach:

wherein, in the transforming of the input quantum state into the 1-level intermediate quantum state, a state of each of a last X-qubit, among the plurality of X-qubits, and a last Y-qubit, among the plurality of Y-qubits, rotates by -π/2 about the Y-axis of the Bloch sphere.

However, in a related field of endeavor (representing images on quantum computers, see p. 404, section 1), SUN teaches:

wherein, in the transforming of the input quantum state into the 1-level intermediate quantum state, a state of each of a last X-qubit, among the plurality of X-qubits, and a last Y-qubit, among the plurality of Y-qubits, rotates by -π/2 about the Y-axis of the Bloch sphere. (SUN, p. 406, section 2.2: "As shown in Fig. 2, the first 3 qubits (c1, c2, and c3) are color qubits that encode RGB color information for an image and the remaining 2n qubits (y_{n−1}, y_{n−2}, ..., y_0 and x_{n−1}, x_{n−2}, ..., x_0) are used to encode position information (Y-Axis and X-Axis) about pixels of a 2^n × 2^n pixels image, as shown in Fig. 2."; SUN, p. 407, section 2.3: "Note that, rotation matrices (rotations around the y-axis of a Bloch sphere by angle 2θ)"; Examiner's Note: SUN discloses encoding x-axis and y-axis information for an image using separate qubits, and further teaches rotating around the y-axis of a Bloch sphere; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA combination now modifies the FRQI representation of HU (which specifically teaches pixel location information, see p. 121306, section III.B) with the teachings of SUN that have separate qubits for x-axis and y-axis coordinates for each pixel, and rotates just the last X-qubit and Y-qubit about the axis of a Bloch sphere, where YAO (p. 5, section II.C) discloses "π/2 rotations" and one of ordinary skill would understand that a "-π/2 rotation" is merely a rotation in the opposite direction)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA as explained above. As disclosed by SUN, one of ordinary skill would have been motivated to do so because SUN teaches using the property of quantum parallelism to encode color and position information, in a manner that requires fewer qubits to encode pixels than other architectures, such as qubit lattice or grid qubit. (p. 405, section 1).

Regarding Claim 10

HU teaches:

An apparatus for converting an image using a quantum circuit, (HU, p. 121304, section I.A: "The scheme proposed in this paper falls into the category of quantum image watermarking methods."; HU, p. 121304, section I.B: "In this work, the quantum Haar wavelet transforms (QHWT) is developed along with its quantum circuit implementation.
Based on the introduced QHWT, a quantum image watermarking algorithm in the frequency domain is proposed for FRQI.”; Examiner’s Note: HU teaches quantum image watermarking, e.g., conversion of an image to a watermarked image, using a quantum circuit implementation) the apparatus comprising at least one processor; and a computer-readable storage medium storing one or more programs including one or more computer-executable commands executed by the at least one processor, the one or more computer-executable commands implement operations for: (HU, p. 121304, section I.B: “Quantum watermark image embedding and extracting schemes, quantum measurement operation for FRQI images, and circuit complexity analysis are discussed in detail in Section V. Simulations on classical computer and experimental results as well as performance analyses are given in Section VI”; Examiner’s Note: a classical computer has at least a processor and computer-readable storage medium (hard disk and/or RAM) for executing the software simulation for the quantum circuit and algorithms of HU)

The remaining limitations in claim 10 correspond to the method of claim 1, and therefore claim 10 is rejected for the same reasons discussed above with respect to claim 1.

Claim 14 depends from claim 13 and claims an apparatus that corresponds to the method of claim 5, and is therefore rejected for the same reasons explained above with respect to claims 5 and 13. Claim 15 depends from claim 14 and claims an apparatus that corresponds to the method of claim 6, and is therefore rejected for the same reasons explained above with respect to claims 6 and 14.

Claims 2 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over HU, in view of SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA, and further in view of Chalumuri, Avinash, et al. "Quantum-enhanced deep neural network architecture for image scene classification." Quantum Information Processing 20.11 (November 11, 2021), hereinafter referenced as CHALUMURI.
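The rejection of claims 5-6 above turns on Y-axis rotation gates and the observation that a “−π/2 rotation” is simply the π/2 rotation run in the opposite direction. This can be checked numerically with the standard single-qubit Ry(θ) matrix; the sketch below is illustrative only and is not code from HU, YAO, or any other cited reference.

```python
import numpy as np

def ry(theta: float) -> np.ndarray:
    """Standard single-qubit rotation about the Y axis of the Bloch sphere."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s],
                     [s,  c]])

forward = ry(np.pi / 2)      # a pi/2 rotation about the Y axis
backward = ry(-np.pi / 2)    # a -pi/2 rotation: same axis, opposite direction
roundtrip = backward @ forward  # applying both in sequence undoes the rotation
```

Because Ry(θ) is real and orthogonal, Ry(−π/2) is exactly the inverse (transpose) of Ry(π/2), which is the sense in which a −π/2 rotation is "merely a rotation in the opposite direction."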
Regarding Claim 2

HU, SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA disclose the method of claim 1 as explained above. However, HU, SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA fail to explicitly teach: wherein, the generating of the input quantum state comprises generating the input quantum state through amplitude encoding based on a plurality of qubits corresponding to coordinates of each pixel and the pixel value of each pixel.

However, in a related field of endeavor (a hybrid quantum + classical computing architecture for satellite image scene classification, see page 3, section 1), CHALUMURI teaches: wherein, the generating of the input quantum state comprises generating the input quantum state through amplitude encoding based on a plurality of qubits corresponding to coordinates of each pixel and the pixel value of each pixel. (CHALUMURI, page 3, section 2: “Hence, parameters to be optimized reduce as the feature extraction is performed separately on a quantum computer. Additionally, exponential advantage can be obtained by amplitude encoding [33,34] classical image data on qubits. Amplitude encoding on a quantum computer uses only n qubits to encode 2^n image pixel values. Also, quantum representations of images are considered to be unique as qubits process information in a high-dimensional Hilbert space.” CHALUMURI, page 6, section 5.2: “As a quantum computer uses qubits to process the data, images are encoded into qubits using amplitude embedding scheme [47]. The controlled rotations of Ry and Rz gates, along with CNOT (Controlled-NOT) gates, are used to encode the pixel data as amplitudes of the superposition of quantum states.”; Examiner’s Note: CHALUMURI discloses using amplitude encoding to encode pixel values; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA-CHALUMURI combination now modifies the FRQI representation of HU (which specifically teaches pixel location information, see p.
121306, section III.B) with the amplitude encoding teachings of CHALUMURI to use amplitude encoding with respect to pixel values and pixel location information (corresponding to recited “coordinates of each pixel”))

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, SZADY, SUN, ZHOU, MAZZOLA, and CHALUMURI as explained above. As disclosed by CHALUMURI, one of ordinary skill would have been motivated to do so because CHALUMURI teaches that using amplitude encoding leads to “exponential advantage” being obtained, where 2^n pixel values can be encoded using only n qubits. (p. 3, section 2).

Claim 11 depends from claim 10 and claims an apparatus that corresponds to the method of claim 2, and is therefore rejected for the same reasons explained above with respect to claims 2 and 10. Claim 12 depends from claim 11 and claims an apparatus that corresponds to the method of claim 3, and is therefore rejected for the same reasons explained above with respect to claims 3 and 11.

Claims 8-9 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over HU, in view of SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA, and further in view of Li, Hai-Sheng, et al. "The multi-level and multi-dimensional quantum wavelet packet transforms." Scientific Reports 8.1 (2018): 13884, hereinafter referenced as LI.

Regarding Claim 8

HU, SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA disclose the method of claim 1 as explained above.
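The amplitude-encoding property cited from CHALUMURI in the rejection of claim 2 above (n qubits suffice to hold 2^n pixel values as the amplitudes of one normalized quantum state) can be sketched in a few lines. The pixel values below are illustrative and are not taken from any cited reference.

```python
import numpy as np

# A 2x2 grayscale image, flattened to a length-4 vector (values illustrative).
pixels = np.array([52.0, 197.0, 8.0, 113.0])

# Amplitude encoding: pixel values become the amplitudes of one quantum
# state. Normalization is required because state amplitudes must form a
# unit vector.
state = pixels / np.linalg.norm(pixels)

# 2**n amplitudes fit on n qubits: here, 4 pixels need only 2 qubits.
n_qubits = int(np.log2(state.size))
```

This is the "exponential advantage" CHALUMURI refers to: the qubit count grows as the logarithm of the number of encoded pixel values.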
However, HU, SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA fail to explicitly teach: transforming a k-level output quantum state, where k is a positive integer equal to or greater than 1, into a k+1 level intermediate quantum state by applying the Y-axis rotation gate to two qubits, among a plurality of qubits representing the k-level output quantum state; transforming the k+1 level intermediate quantum state into a k+1 level output quantum state by applying the swap gate to a plurality of qubits representing the k+1 level intermediate quantum state, the k+1 level output quantum state being a state in which a k+1 level sub-image quantum state, corresponding to each of a plurality of k+1 level sub-images, and a quantum state, corresponding to a label for the k+1 level sub-image quantum state, are entangled with each other.

However, in a related field of endeavor (multi-level quantum wavelet packet transforms), LI teaches: transforming a k-level output quantum state, where k is a positive integer equal to or greater than 1, into a k+1 level intermediate quantum state by applying the Y-axis rotation gate to two qubits, among a plurality of qubits representing the k-level output quantum state; (LI, p. 2: “We present the multi-level and multi-dimensional QWPTs, including HQWPT [Haar quantum wavelet packet transform], IHQWPT, DQWPT and IDQPT for the first time, and prove the correctness by theoretical derivations and simulation experiments.” LI, p. 20, “Simulation experiments of the 2D HQWPT and DQWPT” and “Simulation experiments of the 3D HQWPT and DQWPT”; The examiner notes that HU and SU teach: HU, p. 121308, section IV.B: “For a two-dimensional digital image, there are two ways to transform an image based on HWT: standard decomposition and non-standard decomposition [40]. The standard decomposition method refers to transforming the image's pixel values in each row using a one-dimensional HWT.
This is followed by transforming the columns of the row-transformed image by using a one-dimensional HWT again.” SU, p. 214524, section II.H: “The Polynomial Preparation Theorem (PPT) shows that the MCQI can be constructed using the Hadamard and control rotation gates.”; SU, p. 214531, section II.U: “Then, we perform the next quantum transformation to assign the color values and coordinates to these sorted positions by the superposed 2^{2n} quantum states. Ry(2θ) and Ry(2φ) are rotation matrices (the rotations about Y axis by the angle 2θ and 2φ, respectively)”; Examiner’s Note: LI teaches 2D and 3D HQWPT transforms (corresponding to k = 1, k = 2); SU teaches rotation matrices to perform a quantum transformation for quantum states around a y-axis; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA-LI combination now implements the y-axis rotation of SU, using the rotation gates of both HU (see HU, p. 121315, section III.E) and SU to at least two of the qubits representing pixels with respect to the initial quantum state as in HU, now with the 2D or 3D HQWPT transform of LI)

transforming the k+1 level intermediate quantum state into a k+1 level output quantum state by applying the swap gate to a plurality of qubits representing the k+1 level intermediate quantum state, (LI, p. 2: “We present the multi-level and multi-dimensional QWPTs, including HQWPT [Haar quantum wavelet packet transform], IHQWPT, DQWPT and IDQPT for the first time, and prove the correctness by theoretical derivations and simulation experiments.” LI, p. 20, “Simulation experiments of the 2D HQWPT and DQWPT” and “Simulation experiments of the 3D HQWPT and DQWPT”; The examiner notes that HU and YAO teach: HU, p. 121312, section V.B: “Step 3: The inverse block 1st-level transform of the quantum state |CB>” is used to obtain the watermarked image |CWB>” YAO, p. 9, Appendix B: “Specifically, S4 is the SWAP gate to interchange the states of the two qubits. ...
implemented by (m − k – 1) SWAP gates”; Examiner’s Note: LI teaches 2D and 3D HQWPT transforms (corresponding to k = 1, k = 2); YAO teaches using swap gates to perform a quantum wavelet transform, such as a quantum Haar wavelet transform; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA-LI combination now modifies the quantum Haar wavelet transform of LI to utilize swap gates to effectuate such transform as disclosed by YAO, now with the 2D or 3D HQWPT transform of LI)

the k+1 level output quantum state being a state in which a k+1 level sub-image quantum state, corresponding to each of a plurality of k+1 level sub-images, and a quantum state, corresponding to a label for the k+1 level sub-image quantum state, are entangled with each other. (LI, p. 2: “We present the multi-level and multi-dimensional QWPTs, including HQWPT [Haar quantum wavelet packet transform], IHQWPT, DQWPT and IDQPT for the first time, and prove the correctness by theoretical derivations and simulation experiments.” LI, p. 20, “Simulation experiments of the 2D HQWPT and DQWPT” and “Simulation experiments of the 3D HQWPT and DQWPT”; The examiner notes that SU teaches: (SU, p. 214521, section II.A, Figure 1 [image omitted]; SU, p.
214523, section II.F: “The NEQR uses two entangled quantum sequences to store the grayscale information and position information of the image pixels”; Examiner’s Note: LI teaches 2D and 3D HQWPT transforms (corresponding to k = 1, k = 2); SU teaches, with respect to the FRQI representation (which is the same representation as HU) sub-images have labels (00, 01, 10, 11), and that the complete image is a sum of the sub-images as shown in the corresponding equation in Figure 1, and in a separate embodiment, also teaches a quantum image representation using entangled quantum sequences; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA-LI combination now modifies the quantum image representation of HU to utilize sub-images with labels as in SU, in a manner that the representation entangles the quantum states associated with a first sub-image (such as the sub-image having label 00), and a quantum state associated with a label (such as the quantum state having label 11) as in SU, and now uses the 2D or 3D HQWPT transform of LI to effectuate the transformations)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, SZADY, SUN, ZHOU, MAZZOLA, and LI as explained above. As disclosed by LI, one of ordinary skill would have been motivated to do so because multi-level transforms can now more easily be applied to 2D images and 3D video. (p. 22, Conclusion section). As further disclosed by LI, one of ordinary skill would have been motivated to do so in order to take advantage of the multi-level and multi-dimensional transforms of LI that can “exponentially speed up the computation of the wavelet transform in comparison to the one on a classical computer.” (p. 22, Conclusion section).

Regarding Claim 9

HU, SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA disclose the method of claim 1 as explained above.
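HU's standard-decomposition Haar wavelet transform, quoted in the rejection of claim 8 above (a one-dimensional HWT over every row, then over every column of the row-transformed image), can be sketched classically. The function names and the 2×2 example image are illustrative, not taken from HU or LI.

```python
import numpy as np

def haar_1d(v: np.ndarray) -> np.ndarray:
    """One level of a 1-D Haar wavelet transform: pairwise averages
    (approximation) followed by pairwise differences (detail), with
    orthonormal 1/sqrt(2) scaling."""
    a = (v[0::2] + v[1::2]) / np.sqrt(2)
    d = (v[0::2] - v[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def haar_2d_standard(img: np.ndarray) -> np.ndarray:
    """Standard decomposition as HU describes it: 1-D HWT on each row,
    then 1-D HWT on each column of the row-transformed image."""
    rows = np.apply_along_axis(haar_1d, 1, img)
    return np.apply_along_axis(haar_1d, 0, rows)

img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
out = haar_2d_standard(img)  # quadrants: LL, HL / LH, HH subbands
```

The orthonormal scaling keeps the transform norm-preserving, which is the property that makes a unitary (quantum-circuit) implementation possible.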
HU further teaches: measuring a state of a ... qubit representing a quantum state ... (HU, p. 121314, section V.E: “To determine the classical image information from the quantum registers, a measurement of the quantum state based on multi-projection operators is incorporated.”)

However, HU fails to explicitly teach: determining whether the value of the label qubit is a preset value; transforming a k-level output quantum state into a k+1 level intermediate quantum state by applying the Y-axis rotation gate to two qubits, among a plurality of qubits representing the k-level output quantum state, when the measured value of the label qubit is the preset value; and transforming the k+1 level sub-image quantum state into a k+1 level output quantum state by applying the swap gate to a plurality of qubits representing the k+1 level intermediate state, the k+1 level output quantum state being a state in which a k+1 level sub-image quantum state, corresponding to a plurality of k+1 level sub-image, and a quantum state, corresponding to a label for the k+1 level sub-image quantum state, are entangled with each other.

However, in a related field of endeavor (quantum image representation), SU teaches: measuring a state of a label qubit representing a quantum state ... (SU, p. 214521, section II.A, Figure 1 [image omitted]; Examiner’s Note: the HU-SU-YAO-LI combination now measures a quantum state for a qubit as in HU, where the qubit represents a label as in SU) determining whether the value of the label qubit is a preset value; (SU, p.
214521, section II.A, Figure 1 [image omitted]; Examiner’s Note: the HU-SU-YAO-LI combination now uses the preset values of labels as in SU (00, 01, 10, 11), where such labels are predefined)

However, HU, SU, YAO, SZADY, SUN, ZHOU, and MAZZOLA fail to explicitly teach: transforming a k-level output quantum state into a k+1 level intermediate quantum state by applying the Y-axis rotation gate to two qubits, among a plurality of qubits representing the k-level output quantum state, when the measured value of the label qubit is the preset value; and transforming the k+1 level sub-image quantum state into a k+1 level output quantum state by applying the swap gate to a plurality of qubits representing the k+1 level intermediate state, the k+1 level output quantum state being a state in which a k+1 level sub-image quantum state, corresponding to a plurality of k+1 level sub-image, and a quantum state, corresponding to a label for the k+1 level sub-image quantum state, are entangled with each other.

However, in a related field of endeavor (multi-level quantum wavelet packet transforms), LI teaches: (LI, p. 2: “We present the multi-level and multi-dimensional QWPTs, including HQWPT [Haar quantum wavelet packet transform], IHQWPT, DQWPT and IDQPT for the first time, and prove the correctness by theoretical derivations and simulation experiments.” LI, p. 20, “Simulation experiments of the 2D HQWPT and DQWPT” and “Simulation experiments of the 3D HQWPT and DQWPT”; The examiner notes that HU and SU teach: HU, p. 121314, section V.E: “To determine the classical image information from the quantum registers, a measurement of the quantum state based on multi-projection operators is incorporated.” SU, p.
214521, section II.A, Figure 1 [image omitted]; Examiner’s Note: LI teaches 2D and 3D HQWPT transforms (corresponding to k = 1, k = 2), HU teaches measuring the quantum states of qubits, and SU teaches assigning labels to qubit states; the HU-SU-YAO-LI combination now measures the states of qubit labels as in HU and SU, where such measurement is done for a sub-image created using a 2D or 3D HQWPT transform of LI)

transforming a k-level output quantum state into a k+1 level intermediate quantum state by applying the Y-axis rotation gate to two qubits, among a plurality of qubits representing the k-level output quantum state, when the measured value of the label qubit is the preset value; and (LI, p. 2: “We present the multi-level and multi-dimensional QWPTs, including HQWPT [Haar quantum wavelet packet transform], IHQWPT, DQWPT and IDQPT for the first time, and prove the correctness by theoretical derivations and simulation experiments.” LI, p. 20, “Simulation experiments of the 2D HQWPT and DQWPT” and “Simulation experiments of the 3D HQWPT and DQWPT”; The examiner notes that HU and SU teach: HU, p. 121308, section IV.B: “For a two-dimensional digital image, there are two ways to transform an image based on HWT: standard decomposition and non-standard decomposition [40]. The standard decomposition method refers to transforming the image's pixel values in each row using a one-dimensional HWT. This is followed by transforming the columns of the row-transformed image by using a one-dimensional HWT again.” SU, p. 214524, section II.H: “The Polynomial Preparation Theorem (PPT) shows that the MCQI can be constructed using the Hadamard and control rotation gates.”; SU, p. 214531, section II.U: “Then, we perform the next quantum transformation to assign the color values and coordinates to these sorted positions by the superposed 2^{2n} quantum states.
Ry(2θ) and Ry(2φ) are rotation matrices (the rotations about Y axis by the angle 2θ and 2φ, respectively)”; Examiner’s Note: LI teaches 2D and 3D HQWPT transforms (corresponding to k = 1, k = 2); SU teaches rotation matrices to perform a quantum transformation for quantum states around a y-axis; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA-LI combination now implements the y-axis rotation of SU, using the rotation gates of both HU (see HU, p. 121315, section III.E) and SU to at least two of the qubits representing pixels with respect to the initial quantum state as in HU, now with the 2D or 3D HQWPT transform of LI, solely when the measured value matches the predefined label values of SU)

transforming the k+1 level sub-image quantum state into a k+1 level output quantum state by applying the swap gate to a plurality of qubits representing the k+1 level intermediate state, (LI, p. 2: “We present the multi-level and multi-dimensional QWPTs, including HQWPT [Haar quantum wavelet packet transform], IHQWPT, DQWPT and IDQPT for the first time, and prove the correctness by theoretical derivations and simulation experiments.” LI, p. 20, “Simulation experiments of the 2D HQWPT and DQWPT” and “Simulation experiments of the 3D HQWPT and DQWPT”; The examiner notes that HU and YAO teach: HU, p. 121312, section V.B: “Step 3: The inverse block 1st-level transform of the quantum state |CB>” is used to obtain the watermarked image |CWB>” YAO, p. 9, Appendix B: “Specifically, S4 is the SWAP gate to interchange the states of the two qubits. ...
implemented by (m − k – 1) SWAP gates”; Examiner’s Note: LI teaches 2D and 3D HQWPT transforms (corresponding to k = 1, k = 2); YAO teaches using swap gates to perform a quantum wavelet transform, such as a quantum Haar wavelet transform; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA-LI combination now modifies the quantum Haar wavelet transform of LI to utilize swap gates to effectuate such transform as disclosed by YAO, now with the 2D or 3D HQWPT transform of LI)

the k+1 level output quantum state being a state in which a k+1 level sub-image quantum state, corresponding to a plurality of k+1 level sub-image, and a quantum state, corresponding to a label for the k+1 level sub-image quantum state, are entangled with each other. (LI, p. 2: “We present the multi-level and multi-dimensional QWPTs, including HQWPT [Haar quantum wavelet packet transform], IHQWPT, DQWPT and IDQPT for the first time, and prove the correctness by theoretical derivations and simulation experiments.” LI, p. 20, “Simulation experiments of the 2D HQWPT and DQWPT” and “Simulation experiments of the 3D HQWPT and DQWPT”; The examiner notes that SU teaches: (SU, p. 214521, section II.A, Figure 1 [image omitted]; SU, p.
214523, section II.F: “The NEQR uses two entangled quantum sequences to store the grayscale information and position information of the image pixels”; Examiner’s Note: LI teaches 2D and 3D HQWPT transforms (corresponding to k = 1, k = 2); SU teaches, with respect to the FRQI representation (which is the same representation as HU) sub-images have labels (00, 01, 10, 11), and that the complete image is a sum of the sub-images as shown in the corresponding equation in Figure 1, and in a separate embodiment, also teaches a quantum image representation using entangled quantum sequences; the HU-SU-YAO-SZADY-SUN-ZHOU-MAZZOLA-LI combination now modifies the quantum image representation of HU to utilize sub-images with labels as in SU, in a manner that the representation entangles the quantum states associated with a first sub-image (such as the sub-image having label 00), and a quantum state associated with a label (such as the quantum state having label 11) as in SU, and now uses the 2D or 3D HQWPT transform of LI to effectuate the transformations)

Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the quantum image watermarking teachings of HU with the teachings of SU, YAO, SZADY, SUN, ZHOU, MAZZOLA, and LI as explained above. As disclosed by LI, one of ordinary skill would have been motivated to do so because multi-level transforms can now more easily be applied to 2D images and 3D video. (p. 22, Conclusion section). As further disclosed by LI, one of ordinary skill would have been motivated to do so in order to take advantage of the multi-level and multi-dimensional transforms of LI that can “exponentially speed up the computation of the wavelet transform in comparison to the one on a classical computer.” (p. 22, Conclusion section).
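The SWAP gates relied on from YAO in the rejections above interchange the states of two qubits. A minimal numerical sketch using the standard 4×4 SWAP matrix follows; it is illustrative only, not code from YAO or any other cited reference.

```python
import numpy as np

# Two-qubit SWAP gate in the computational basis |00>, |01>, |10>, |11>.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

state_01 = np.kron(ket0, ket1)   # qubit A in |0>, qubit B in |1>
state_10 = SWAP @ state_01       # after SWAP: A in |1>, B in |0>
```

SWAP is its own inverse (SWAP² = I), which is why chains of SWAP gates can be used freely to reorder qubits inside a wavelet-transform circuit.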
Claim 17 depends from claim 10 and claims an apparatus that corresponds to the method of claim 8, and is therefore rejected for the same reasons explained above with respect to claims 8 and 10. Claim 18 depends from claim 10 and claims an apparatus that corresponds to the method of claim 9, and is therefore rejected for the same reasons explained above with respect to claims 9 and 10.

Allowable Subject Matter

Claim 19 is allowed. Claims 7 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of the allowance of claim 19 and the allowable subject matter of claims 7 and 16:

Claim 19 is allowed because none of the references of record either alone or in combination fairly disclose or suggest the combination of limitations specified in claim 19, including at least: wherein the transforming of the 1-level intermediate quantum state into the 1-level output quantum state comprises: sequentially swapping states of adjacent X-qubits from the last X-qubit to a first X-qubit, among the plurality of X-qubits; sequentially swapping states of adjacent Y-qubits from the last Y-qubit to a first Y-qubit, among the plurality of Y-qubits; and swapping states of the first Y-qubit and the last X-qubit, and then sequentially swapping states of adjacent X-qubits from the last X-qubit to a second X-qubit, among the plurality of X-qubits.

The closest prior art of record discloses: HU teaches a quantum watermarking technique that transforms an initial quantum state to a middle, then final quantum state, and uses a Haar quantum wavelet transform to create low-frequency, high-frequency vertical, high-frequency horizontal, and high-frequency diagonal subbands. (HU, p. 121304, section I.B, p. 121307, section III.C, p. 121310, section V.A).
SU teaches using predefined labels (00, 10, 01, and 11) for sub-images, and using entangled quantum sequences for image representation. (SU, p. 214521, section II.A, Fig. 1 and p. 214523, section II.F). YAO teaches using swap gates to effectuate transforms. (YAO, p. 9, Appendix B). SUN discloses encoding x-axis and y-axis information for an image using separate qubits. (p. 406, section 2.2). LI teaches multi-level and multi-dimensional (at least 2D and 3D) versions of the Haar quantum wavelet transform. (LI, pp. 2 and 20). US 20250131308 A1, hereinafter referenced as HEIMONEN, teaches sequentially swapping the state of each qubit. (para. 0043).

However, the examiner has found that the distinct feature of the Applicant's claimed invention over the prior art is the explicit claiming of the aforementioned limitations in combination with all the other limitations as specified in claim 19. In particular, the particular order and sequence of the sequential swapping of X-qubits and Y-qubits (as such terms are recited in the claim) would not have been obvious to one of ordinary skill in the art without the hindsight bias of the present disclosure. Therefore, because the prior art of record neither anticipates nor makes obvious the limitations recited in claim 19, claim 19 is allowed.
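The sequential adjacent-qubit swapping recited in claim 19 (and claim 7) can be illustrated on qubit labels: one pass of adjacent swaps from the last position up to the first carries the last qubit to the front. The pure-Python sketch below shows only this general effect on qubit ordering; it does not reproduce the claim's exact interleaving of X-qubits and Y-qubits, and the labels are illustrative.

```python
def sequential_swap_to_front(qubits: list) -> list:
    """Sequentially swap adjacent positions from the last qubit toward
    the first; each swap carries the last qubit one position left."""
    q = list(qubits)
    for i in range(len(q) - 1, 0, -1):
        q[i - 1], q[i] = q[i], q[i - 1]
    return q

# One pass over four illustrative X-qubit labels: the last qubit (x4)
# ends up in the first position, a cyclic shift of the register.
order = sequential_swap_to_front(["x1", "x2", "x3", "x4"])
```

Because each adjacent swap corresponds to a two-qubit SWAP gate on a quantum register, a pass of n−1 such gates realizes this cyclic reordering unitarily.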
Claim 7 would be considered allowable because none of the references of record either alone or in combination fairly disclose or suggest the combination of limitations specified in claim 7, including at least: wherein the transforming of the 1-level intermediate quantum state into the 1-level output quantum state comprises: sequentially swapping states of adjacent X-qubits from the last X-qubit to a first X-qubit, among the plurality of X-qubits; sequentially swapping states of adjacent Y-qubits from the last Y-qubit to a first Y-qubit, among the plurality of Y-qubits; and swapping states of the first Y-qubit and the last X-qubit, and then sequentially swapping states of adjacent X-qubits from the last X-qubit to a second X-qubit, among the plurality of X-qubits.

The closest prior art of record discloses: HU teaches a quantum watermarking technique that transforms an initial quantum state to a middle, then final quantum state, and uses a Haar quantum wavelet transform to create low-frequency, high-frequency vertical, high-frequency horizontal, and high-frequency diagonal subbands. (HU, p. 121304, section I.B, p. 121307, section III.C, p. 121310, section V.A). SU teaches using predefined labels (00, 10, 01, and 11) for sub-images, and using entangled quantum sequences for image representation. (SU, p. 214521, section II.A, Fig. 1 and p. 214523, section II.F). YAO teaches using swap gates to effectuate transforms. (YAO, p. 9, Appendix B). SZADY teaches selectively applying rotation gates to qubits. (para. 0049). SUN discloses encoding x-axis and y-axis information for an image using separate qubits. (p. 406, section 2.2). ZHOU teaches using two qubits to store image information. (p. 1613, section 4.1.1). MAZZOLA teaches that the final qubits can act as a label. (para. 0076). LI teaches multi-level and multi-dimensional (at least 2D and 3D) versions of the Haar quantum wavelet transform. (LI, pp. 2 and 20).
US 20250131308 A1, hereinafter referenced as HEIMONEN, teaches sequentially swapping the state of each qubit. (para. 0043).

However, the examiner has found that the distinct feature of the Applicant's claimed invention over the prior art is the explicit claiming of the aforementioned limitations in combination with all the other limitations as specified in claim 7. In particular, the particular order and sequence of the sequential swapping of X-qubits and Y-qubits (as such terms are recited in the claim) would not have been obvious to one of ordinary skill in the art without the hindsight bias of the present disclosure. Therefore, because the prior art of record neither anticipates nor makes obvious the limitations recited in claim 7, claim 7 would be allowed if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim 16 depends from claim 15, and claims an apparatus that corresponds to the method of claim 7, and would be allowable for the same reasons explained with respect to claim 7 if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C LEE whose telephone number is (571)272-4933. The examiner can normally be reached M-F 12:00 pm - 8:00 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL C. LEE/
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128

Prosecution Timeline

Dec 30, 2022: Application Filed
Oct 10, 2025: Non-Final Rejection (§103)
Jan 09, 2026: Response Filed
Feb 04, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603081
METHOD AND SERVER FOR A TEXT-TO-SPEECH PROCESSING
2y 5m to grant; granted Apr 14, 2026
Patent 12602605
QUANTUM COMPUTER ARCHITECTURE BASED ON MULTI-QUBIT GATES
2y 5m to grant; granted Apr 14, 2026
Patent 12591915
METHODS AND SYSTEMS FOR DETERMINING RECOMMENDATIONS BASED ON REAL-TIME OPTIMIZATION OF MACHINE LEARNING MODELS
2y 5m to grant; granted Mar 31, 2026
Patent 12585743
INTERFACE ACCESS PROCESSING METHOD, COMPUTER DEVICE AND STORAGE MEDIUM
2y 5m to grant; granted Mar 24, 2026
Patent 12568935
AI-BASED LIVESTOCK MANAGEMENT SYSTEM AND LIVESTOCK MANAGEMENT METHOD THEREOF
2y 5m to grant; granted Mar 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 59%
With Interview (+27.1%): 86%
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
