DETAILED ACTION
Claims 1, 4-10, 12, 13, and 18-22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, as applied to claims 1, 4-10, 12, 13, and 18-22 above, in view of Liu et al. (US 2020/0304835 A1).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, in view of YAO et al. (CN 106687989 A) with machine translation.
Claims 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, in view of Laszlo et al. (US 2022/0391692 A1).
Claims 23 and 26-30 are rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, as applied to claims 1, 4-10, 12, 13, and 18-22 above, in view of Qi et al. (US 2018/0157929 A1).
Claims 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, as applied to claims 1, 4-10, 12, 13, and 18-22 above, in view of Qi et al. (US 2018/0157929 A1), as applied to claims 23 and 26-30 above, further in view of Liu et al. (US 2020/0304835 A1), as applied to claims 2 and 3.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/20/2025 has been entered.
Claims 1-30 are pending.
Claim Rejections - 35 USC § 101
Because the claims have been amended, they are re-evaluated under 35 U.S.C. 101:
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
[Image: media_image1.png]
Step Zero: establish broadest reasonable interpretation (see footnotes);
Step 1: claim 1 is a machine; claim 19 is a process; claim 23 is a machine; claim 28 is a process;
Step 2A, prong 1:
The claim(s) recite(s) math (“values”¹; “generate”; “generate…values”² ³):
1. (Currently Amended) A device comprising:
one or more processors configured to:
obtain⁴ encoded data⁵ associated with one or more motion values;
obtain one or more first predicted motion values based on a first portion of the encoded data;
generate, based on the one or more first predicted motion values, one or more estimated values of one or more second input values used to generate⁶ ⁷ a second portion of the encoded data⁸;
obtain conditional input of a compression network based on the one or more estimated values of the one or more second input values;
process, using the compression network, the encoded data and the conditional input to generate one or more second predicted motion values⁹.
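Purely as an illustrative sketch, and not the applicant's or any cited reference's implementation (every function name, data shape, and operation below is a hypothetical stand-in), the decoder-side sequence recited in claim 1 can be modeled as:

```python
# Hypothetical toy model of the flow recited in claim 1.
# All names and the arithmetic are illustrative assumptions only.

def decode_first_motion(first_portion):
    """Obtain first predicted motion values from the first portion."""
    return [v / 10.0 for v in first_portion]

def estimate_second_inputs(first_predicted):
    """Generate estimated values of the second input values
    (stand-in for entropy-parameter estimation)."""
    return [2.0 * v for v in first_predicted]

def conditional_input(estimates):
    """Obtain the compression network's conditional input
    from the estimated values."""
    return sum(estimates) / len(estimates)

def compression_network(encoded_data, cond):
    """Process the encoded data plus the conditional input to
    produce the second predicted motion values."""
    return [v + cond for v in encoded_data["second_portion"]]

encoded = {"first_portion": [10, 20, 30], "second_portion": [1, 2, 3]}
first_pred = decode_first_motion(encoded["first_portion"])
est = estimate_second_inputs(first_pred)
cond = conditional_input(est)
second_pred = compression_network(encoded, cond)
```

The sketch only mirrors the claim's data flow: the second portion is processed conditioned on estimates derived from predictions over the first portion.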
[Image: media_image2.png]
[Image: media_image3.png]
Step 2A, prong 2:
This judicial exception is not integrated into a practical application because the additional elements (“processors”; “encoded data”; “encoded data”; “a second portion”; “conditional input of a compression network”; “process, using the compression network, the encoded data and the conditional input”¹⁰) do not improve the technical field of compression, as one of skill in the art would recognize in view of applicant’s disclosure [0001], [0050], FIG. 1:
[Image: media_image4.png]
[Image: media_image5.png]
Step 2B:
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements (such as “processor”; “compression network”¹¹ (wireless); and “input” (conditional)), considered individually or in combination with the abstract idea, adhere¹² to the conventional, in view of applicant’s disclosure [0003]:
[Image: media_image6.png]
[Image: media_image7.png]
1. (SUGGESTED) A device comprising:
one or more processors configured to:
obtain¹³ encoded data¹⁴ THAT IS¹⁵ associated with one or more motion values;
obtain one or more first predicted motion values based on a first portion of the encoded data;
generate, based on the one or more first predicted motion values, one or more estimated values of one or more second input values used to generate¹⁶ ¹⁷ a second portion of the encoded data¹⁸;
obtain conditional input of a compression network based on the one or more estimated values of the one or more second input values¹⁹;
process, using the compression network, the encoded data and the conditional input to generate one or more second predicted motion values²⁰.
Response to Arguments
35 USC 102 and 103
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “based on encoded data”, applicant’s remarks, page 12, 1st para, 3rd S) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
In contrast, claim 1, line 4, states that the (conditional) input is “based on a first portion”.
Thus, Esenlik teaches that the conditional input (y-hat) includes²¹ (i.e., involves as a factor) an (encoder motion) estimate, via Esenlik’s provisional 63/390,263, at:
Fig. 22 (encoder),
Fig. 23 (decoder),
Fig. 11 (entropy decoding):
[Image: media_image8.png]
Thus Esenlik discloses (as detailed in the below 35 USC 102 rejection of claim 1) via applicant’s remarks, page 12, 2nd paragraph:
"obtain[ing]²² one or more first predicted motion values based on a first portion of the encoded data; generat[ing]²³ one or more estimated values based on the one or more first predicted motion values, wherein the one or more estimated values correspond to one or more second input values used to generate a second portion of the encoded data; obtain[ing]²⁴ conditional input of a compression network based on the one or more estimated values for the one or more second input values," as in claim 1.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4-10, 12, 13, and 18-22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US:
[Image: media_image9.png]
Re 1. (Currently Amended) Esenlik discloses via the Priority Data, A device comprising:
one or more (“video decoder 4500” [0007]) processors configured to:
[Image: media_image10.png]
obtain encoded data (“encoded into the bitstreams, i.e. some of the residuals are skipped being encoded into the bitstreams.” [00172] 1st S: fig. 22: “Encoded bitstream”) associated with one or more (full) motion (“vector”²⁵ [0017] last S) values;
[Image: media_image11.png]
obtain (in decoded form) one or more first (current) predicted motion values (or a current predicted video residual²⁶ in decoded form via “residual data for the current²⁷ video block by subtracting the predicted video block(s) of the current video block” [0027] 1st S: fig. 22:4407) based on²⁸ ²⁹ a first portion (fig. 5: “bits1” with a “second bitstream (bits2)” [0074] 3rd S: fig. 5: “bits2”) of³⁰ the encoded data;
[Image: media_image12.png]
generate, based on³¹ ³² the one or more first predicted motion values, one or more estimated values (via “estimated” “rate” [0054] 4th S & “estimated variance σ” [00161]: figs. 4,5,6: “Entropy Parameters”) of³³ one or more second input values (via said fig. 5: “bits1” with a “second bitstream (bits2)”³⁴ [0074] 3rd S: figs. 5,6: “bits2”) used³⁵ to generate a second (“prediction block” [0041] 1st S: fig. 23:4502: “Motion Compensation Unit”) portion of³⁶ the encoded data;
[Image: media_image13.png]
obtain conditional input (y-hat [0200]: represented in fig. 1: circled y-hat, or in fig. 22:4414: “Entropy Encoding Unit” 4414, or fig. 6: “ŷ”) of a (“video” [0092]) compression network based on³⁷ ³⁸ the one or more estimated values of³⁹ the one or more second input values⁴⁰ ⁴¹
[Image: media_image14.png]
process (via feedback in fig. 22 or at a corresponding decoder of fig. 23), using⁴² the compression network, the encoded data and the conditional input to generate one or more second (feedback/decoding) predicted motion values (via:
[Image: media_image15.png]
[Image: media_image16.png]
[Image: media_image17.png]
[Image: media_image18.png]
[Image: media_image8.png]
Re 4. (Original), The device of claim 1, wherein the one or more motion values represent one or more motion vectors (“for the block” [0017] last S) associated with one or more image units.
Re 5. (Original), The device of claim 4, wherein an image unit of the one or more image units includes a coding (“entropy”) unit (“4414” [0013]).
Re 6. (Original), The device of claim 4, wherein an image unit of the one or more image units includes a block of (“sub-“ [0017] last S)pixels.
Re 7. (Original), The device of claim 4, wherein an image unit of the one or more image units includes a (previous reference) frame ([0094]) of (sub-)pixels.
Re 8. (Original), The device of claim 1, wherein the one or more second predicted motion values represent future (“inter-prediction” [0017] last S) motion vectors.
Re 9. (Original), The device of claim 1, wherein the one or more second predicted motion values correspond to a reconstructed (image) version (x-hat, [0052] penult S) of the one or more motion values.
Re 10. (Original), The device of claim 1, wherein the one or more processors are integrated in at least one of a headset, a mobile (“phone”, [0004] last S) communication device, an extended reality (XR) device, or a vehicle.
Re 12. (Original), The device of claim 1, wherein the compression network includes a (“ANN” [0042]) neural network with multiple layers.
Re 13. (Original), The device of claim 1, wherein the compression network includes a video decoder, and wherein the video decoder has multiple (ANN) decoder layers configured to decode multiple orders of resolution of the encoded data associated with the one or more motion values.
Re 18. (Original), The device of claim 1, further comprising a (“modulator” [0009] 8th S) modem configured to receive a bitstream from an encoder device, wherein the bitstream includes the encoded data.
Re claim 19, claim 19 is rejected similar to claim 1:
19. (Currently Amended) A method comprising:
obtaining, at a device,⁴³ encoded data associated with one or more motion values;
obtaining, at the device, one or more first predicted motion values based on a first portion of the encoded data;
generating, at the device based on the one or more first predicted motion values, one or more estimated values of one or more second input values used to generate a second portion of the encoded data;
obtaining, at the device,⁴⁴ conditional input of a compression network based on the one or more estimated values of the one or more input values;
processing, using the compression network,⁴⁵ the encoded data and the conditional input to generate one or more second predicted motion values.
Re 20. (Original), The method of claim 19, wherein processing the encoded data and the conditional (y-hat) input (in fig. 1) includes:
processing the conditional input (y-hat) using the compression network to generate (“reconstructed” [00176] 1st S) feature data (represented as fig. 23:4501: “Entropy Decoding Unit”: detailed in fig. 3: “reconstruction”); and
processing the encoded (bitstream) data and the (reconstructed) feature data to generate the one or more second predicted motion values (via fig. 23:4502: “Motion Compensation Unit”:
[Image: media_image16.png]
[Image: media_image18.png]
Re 21. (Original), The method of claim 20, wherein the (reconstruction) feature data corresponds to multi-scale (“MSSIM”: MS-SSIM (MultiScale-Structural SIMilarity): in equation of [00199]) feature data having different spatial (“sub-pixel…or integer pixel” [0017] last S) resolutions.
Re 22. (Original), The method of claim 20, wherein the feature data includes multi-scale wavelet (“based” [0088] 2nd S) transform data (via:
[Image: media_image19.png]
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, as applied to claims 1, 4-10, 12, 13, and 18-22 above, in view of Liu et al. (US 2020/0304835 A1):
[Image: media_image20.png]
Re 2., Esenlik teaches claim 2 of The device of claim 1, wherein the one or more motion (estimated) values are based on output of one or more sensors.
Esenlik does not teach the difference⁴⁶ of claim 2:
“one or more sensors”.
Re 2., Liu teaches the difference of claim 2 of The device of claim 1, wherein the one or more (full “velocity”⁴⁷ [0059] 4th S) motion values are based on output of one or more sensors (“to obtain”, [0059] 2nd S, said full motion values).
Since Esenlik suggests using various disclosed techniques [0002], such as a bitrate reference, one of skill in the art of bitrates could predictably modify Esenlik to be as Liu’s, recognizing the change as being “directed to improved systems and methods for image compression”, Liu [0027] 1st S.
Re 3., the combination of Esenlik with Liu teaches claim 3 of The device of claim 2, wherein the one or more (full-velocity) sensors include an inertial measurement unit (IMU) (Liu: [0054] penult S).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, in view of YAO et al. (CN 106687989 A) with machine translation:
[Image: media_image21.png]
Re 11., Esenlik teaches The device of claim 1, wherein the one or more motion (estimation) values indicate one or more of linear velocity, linear acceleration, linear position, angular velocity, angular acceleration, or angular position.
Esenlik does not teach the difference of claim 11:
“one or more of linear velocity, linear acceleration, linear position, angular velocity, angular acceleration, or angular position”.
Yao teaches the difference of claim 11:
Re 11., The device of claim 1, wherein the one or more motion (“information”) values (“relevant to evaluation”) indicate (“as a solution”) one or more of linear velocity, linear acceleration, linear position (or “linear recursive type geometrical relation”-“mark position”), angular velocity, angular acceleration, or angular position (via machine translation, pg. 4, last txt blk:
more specifically, in the face detection and facial expression recognition system, to define a facial shape by fixing set of the mark, wherein each single mark relates to the semantics of part face on important data point, face, such as eyes, mouth, nose, jaw line, chin, etc. the data mark position may include relevant to evaluation regardless of what data includes geometry (or a pixel) coordinates, brightness, color and/or motion information, which indicates one or more faces in the video sequence changed by the expression to expression (or from frame to frame). change in facial expression usually causing a perceptible physical geometry characteristic of the face mark around the main face is in or very small or fine change. More importantly, it has been found that such a geometrical change of facial feature of one specific facial marker is at least closely relates to cooperatively define the same face and/or one or more portions to face the vicinity of the neighboring facial marker. In some cases, all or nearly all of the sign face may relate to single sign on for a particular facial expression on the geometry. In this context, the term "relates to" means that can position a certain distance (or range) position of the mark with respect to the other mark for a particular facial expression. Therefore, dense geometric characteristic relation between corresponding subset of extracting multiple or each facial marker and comprises near neighbour mark, can describe the facial expression change is more precise and efficient manner, thereby avoiding complex actions of the unit detection technology. Therefore, in this process, the data of each one main mark position can be mark of those packet or subset but not to represent a single mark. Further, the concept can be extended forming aiming at the main mark formed every one subset comprises all limited surface part (excluding the main mark itself) or most other designations.
As a solution, by using main face flag and sub set of mark position data to formulate and solve the problem of linear recursive type geometrical relation is formed among the main mark and its subset. linear recursion can be used to capture one of the following geometric features between dense and distinctiveness of the relationship: (1) at least one or more facial marker (called each time a subset of the mark distributed to the main or anchoring mark main mark), and optionally on the face of each of the mark could be other mark main mark, and (2) forming a main mark of the mark of the subset, such as near neighbour mark, or all other mark on the same face. In other words, process or system uses each facial marker and geometrical characteristic of the corresponding subset with other mark between the linear 1 K single geometrical relation value. Therefore, for consistency and clarity, the primary concept of the main mark by performing such as linear recursive cooperatively forming a linear combination of the subset will be referred to as geometric relationship in the text, and descriptor-single relation vector with each individual mark in the subset of the known geometric relation value.).
Since Esenlik teaches communication [0002], one of skill in communication could predictably modify Esenlik to be as Yao’s, recognizing the change as resulting in “proper… communication”, Yao, pg. 1, last txt blk.
Claims 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, in view of Laszlo et al. (US 2022/0391692 A1):
[Image: media_image22.png]
Re 14., Esenlik teaches The device of claim 1, wherein the one or more processors are configured to track an object associated with the one or more motion (estimate) values across one or more frames of (“coded” [0048]) pixels.
Esenlik does not teach the difference of claim 14:
“to track an object”
Laszlo teaches the difference of claim 14:
Re 14., The device of claim 1, wherein the one or more processors are configured to track an object (via “tracking prediction 206” [0085]: fig. 2A) associated with the one or more motion values (“of the motion prediction network parameters” [0104], 3rd S) across one or more (“respective” [0086]) frames of pixels (“each” [0087] last S).
Since Esenlik suggests other motion predictions by providing examples of “inter prediction”, pg. 35: [0017] 2nd S via fig. 22:4403: “Mode Selection Unit”, one of skill in the art of motion prediction could predictably modify Esenlik to be as Laszlo’s, recognizing the change as generating “accurate predictions” of motion, Laszlo [0103] 1st S:
[Image: media_image23.png]
[Image: media_image24.png]
Re 15., Esenlik does not disclose claim 15; however, the combination of Esenlik with Laszlo teaches claim 15 of The device of claim 14, wherein the one or more second predicted motion (estimated) values represent a collision avoidance (as shown in Laszlo’s fig. 3C) output associated with a vehicle.
Re 16., Esenlik does not disclose claim 16; however, the combination of Esenlik and Laszlo teaches claim 16 of The device of claim 15, wherein the collision avoidance output indicates a predicted future position (Laszlo: fig. 3C:352: “course correction”, Laszlo [0113] last S) of the vehicle relative to the (person) object.
Re 17., Esenlik does not disclose claim 17; however, the combination of Esenlik and Laszlo teaches claim 17 of The device of claim 15, wherein the collision avoidance (correction) output indicates a predicted (vectorized) future (“0,0”, Esenlik [00152] 2nd S) position of the vehicle and a predicted future position (“0,0” [00152] 2nd S) of the object (“of interest”, Laszlo [0091]).
Claims 23 and 26-30 are rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, as applied to claims 1, 4-10, 12, 13, and 18-22 above, in view of Qi et al. (US 2018/0157929 A1):
[Image: media_image25.png]
Claim 23 is rejected like claims 1,19:
Re 23. (Currently Amended), Esenlik teaches A device comprising:
one or more processors configured to:
generate a first portion of encoded data based on a first particular motion value;
generate, based on one or more first predicted motion values based on the first portion of the data, one or more estimated values, wherein the one or more estimated values correspond to one or more motion values;
obtain conditional (y-hat) input of a compression network based on the one or more estimated values;
process⁴⁸ (via an “arithmetic encoder” [0063] 3rd S: fig. 3: “AE”), using⁴⁹ the compression network,⁵⁰ the conditional (y-hat) input and one or more motion values (via fig. 22:4404: “Motion Estimation Unit”) distinct from the conditional input (y-hat, twice) and the first particular motion value to generate (arithmetically) a second portion of the encoded data (y-hat upon output of fig. 3 “AD”) associated with the one or more motion values
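For illustration only, the encoder-side sequence recited in claim 23 can be sketched as below; every name and operation is a hypothetical stand-in (in particular, a simple subtraction stands in for arithmetic encoding, which is not the cited “AE” of Esenlik’s fig. 3):

```python
# Hypothetical toy model of the encoder-side flow recited in claim 23.
# All names and the arithmetic are illustrative assumptions only.

def encode_first_portion(first_motion_value):
    """Generate the first portion of encoded data from a first
    particular motion value."""
    return [first_motion_value * 10]

def predict_from_first_portion(first_portion):
    """Obtain first predicted motion values from the first portion."""
    return [v / 10.0 for v in first_portion]

def estimate_values(first_predicted):
    """Generate estimated values corresponding to the motion values."""
    return [v + 0.5 for v in first_predicted]

def encode_second_portion(cond, motion_values):
    """Process the conditional input and further, distinct motion
    values to generate the second portion of the encoded data
    (a subtraction stands in for arithmetic encoding)."""
    return [round(m - cond, 3) for m in motion_values]

first_particular = 3
first_portion = encode_first_portion(first_particular)
cond = estimate_values(predict_from_first_portion(first_portion))[0]
second_portion = encode_second_portion(cond, [4.0, 5.0])
```

The sketch mirrors only the claim's data flow: the second portion is generated from motion values distinct from the conditional input, conditioned on estimates derived from the first portion.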
[Image: media_image26.png]
[Image: media_image27.png]
Esenlik does not teach the difference⁵¹ of claim 23 of --(first)⁵² particular (motion value)--.
Qi teaches the difference of claim 23:
(first)⁵³ particular (motion value) (“(e.g., a minimum motion value) that indicates that motion is detected” [0080] 4th S: fig. 3: “Motion Threshold 304”).
Since Esenlik teaches, in the context of latency [0094][0095] as faced by applicants, dealing with low latency, one of skill in the art of latency could modify Esenlik to be as Qi’s, seeing in the change that “Computational resources are conserved by generally reserving the second process for use on image-blocks that cannot be reliably processed using only the first process, such as image-blocks that fail to satisfy particular processing criteria.”, Qi [0017] 7th S, and that it would “also reduce latency”, Qi [0021] last S:
[Image: media_image28.png]
Re 26. (Original), The device of claim 23, wherein the one or more motion (estimation) values represent one or more motion (block) vectors associated with one or more (sub- or integer) image (pixel) units.
Re 27. (Original), The device of claim 23, further comprising a (“modulator/demodulator” [0269]) modem configured to transmit a bitstream to a decoder device, wherein the bitstream includes the encoded data.
Re claim 28, claim 28 is rejected similar to claim 23:
Re 28. (Currently Amended), Esenlik of the combination of Esenlik, Qi teaches A method (to reduce latency via blocks at a decoder) comprising:
generating, at a device, a first portion of encoded data based on a first particular motion value;
generating, at the device based on one or more first predicted motion values based on the first portion of the data, one or more estimated values, wherein the one or more estimated values correspond to one or more motion values;
obtaining, at [[a]] the device, conditional input of a compression network based on the one or more estimated values;
processing, using the compression network, the conditional input (y-hat, twice) and one or more (encoder) motion values distinct from the conditional input (y-hat, twice) and the first particular motion value to generate a second portion of the encoded data (y-hat upon output of fig. 3 “AD”) associated with the one or more motion values
[Image: media_image26.png]
[Image: media_image27.png]
[Image: media_image29.png]
[Image: media_image28.png]
Re 29. (Original), The method of claim 28, further comprising:
processing, at the device, the conditional (y-hat) input using the compression network to generate (reconstruction) feature data (as “quantized” [00176]: fig. 3: Q”); and
processing (arithmetically), using the compression network, the one or more motion (estimated) values and the (reconstruction) feature (Q) data to generate the encoded data (y-hat upon output of fig. 3 “AD”).
Re 30. (Original), The method of claim 29, wherein the (latent) feature (Q) data includes multi-scale (MS-SSIM) feature data having different (sub- or integer) spatial (pixel) resolutions.
Claims 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over ESENLIK et al. (WO 2024/020053 A1), with Priority Data: 63/390,263, 18 July 2022 (18.07.2022), US, as applied to claims 1, 4-10, 12, 13, and 18-22 above, in view of Qi et al. (US 2018/0157929 A1), as applied to claims 23 and 26-30 above, further in view of Liu et al. (US 2020/0304835 A1), as applied to claims 2 and 3:
[Image: media_image30.png]
Claim 24 is rejected similar to claim 2:
24. The device of claim 23, wherein the one or more motion values are based on output of one or more sensors.
Claim 25 is rejected similar to claim 3:
25. The device of claim 24, wherein the one or more sensors include an inertial measurement unit (IMU).
Conclusion
The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure. The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.
Citation: Thirumalai et al. (Correlation estimation from compressed images)
Relevance: Thirumalai teaches motion value cost/energy functions E(M): (16)(17)(18)(19), pg. 652, rcol, last para, 1st S, and: “Next, the data function measures the consistency of a particular motion value for pixel z with the vectors Y1 and Y2.”, as the closest to the claimed “a first particular motion value” of claim 23.
Citation: JIANG et al. (CN 112585976 A) with SEARCH machine translation
Relevance: JIANG teaches, pg. 38: “The predefined motion information may include motion vector information having a particular motion value in the x-direction and y-direction, such as, (0, 0), (0, 1), (0, 1), (1, 0), (-1, 0), and the like.”, as the closest to the claimed “a first particular motion value” of claim 23.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571)272-7397. The examiner can normally be reached Monday-Friday, 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENNIS ROSARIO/Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676
1 value: Mathematics.
A. magnitude; quantity; number represented by a figure, symbol, or the like.
the value of an angle;
the value of x;
the value of a sum.
B. a point in the range of a function; a point in the range corresponding to a given point in the domain of a function.
The value of x² at 2 is 4. (Dictionary.com)
2 generate: Mathematics. to trace (a figure) by the motion of a point, straight line, or curve, wherein line is defined: Mathematics. a continuous extent of length, straight or curved, without breadth or thickness; the trace of a moving point, wherein trace is defined: Mathematics.
a. the intersection of two planes, or of a plane and a surface.
b. the sum of the elements along the principal diagonal of a square matrix.
c. the geometric locus of an equation. (Dictionary.com: AMERICAN)
3 generate: geometry to trace or form by moving a point, line, or plane in a specific way
circular motion of a line generates a cylinder, wherein line is defined: maths
A. any straight one-dimensional geometrical element whose identity is determined by two points. A line segment lies between any two points on a line
B. a set of points ( x, y ) that satisfies the equation y = mx + c, where m is the gradient and c is the intercept with the y -axis (Dictionary.com: BRITISH)
4 “obtain” is a plural verb
5 “data” written in the plural sense: (used with a plural verb: “obtain”) individual facts, statistics, or items of information. These data represent the results of our analyses. (Dictionary.com)
6 “generate” is a plural verb
7 “generate” maps back (one-way ticket only: no going back to claim 1, i.e., not reading limitations from applicant’s disclosure back into claim 1) to the last sentence of applicant’s disclosure [0050] (reproduced below): “Generating”. That last sentence uses a singular verb “is” (of “is available”). Claim 1 does not reflect this singular verb (“is”) and thus does not reflect the singular sense of data (applicant’s fig. 1: “Encoded Data 165”), and thus does not reflect the improvement as one of skill in the art would recognize it in applicant’s disclosure:
1. (usually used with a singular verb) information in digital format, as encoded text or numbers, or multimedia images, audio, or video.
2. (used with a singular verb) a body of facts; information.
Additional data is available from the president of the firm. (Dictionary.com)
8 “data” in the plural verb (“generate”) context: (used with a plural verb) individual facts, statistics, or items of information. These data represent the results of our analyses. (Dictionary.com)
9 This last limitation maps (again, one-way ticket and no returning back) to applicant’s disclosure [0080]’s last sentence’s (reproduced below) use of a singular verb “reduces”: “generating the encoded data 165B…reduces the information”
10 This last limitation maps (again, one-way ticket and no returning back) to applicant’s disclosure [0080]’s last sentence’s use of a singular verb “reduces”: “generating the encoded data 165B…reduces the information”: similarly as discussed above in applicant’s paragraph [0050], claim 1 does not reflect this singular aspect of [0080].
11 network: Telecommunications, Computers. a system containing any combination of computers, computer terminals, printers, audio or visual display devices, or telephones interconnected by telecommunication equipment or cables: used to transmit or receive information. (Dictionary.com)
12 plural verb referring back to elements
13 “obtain” is a plural verb referring to “processors”
14 “data” written in the plural sense: (used with a plural verb: “obtain”) individual facts, statistics, or items of information. These data represent the results of our analyses. (Dictionary.com)
15 IS (of THAT IS) refers back to the singular “encoded data”, consistent with applicant’s singular context in [0050] and [0080].
16 “generate” is a plural verb
17 “generate” maps back (one-way ticket only: no going back to claim 1, i.e., not reading limitations from applicant’s disclosure back into claim 1) to the last sentence of applicant’s disclosure [0050] (reproduced below): “Generating”. That last sentence uses a singular verb “is” (of “is available”). Claim 1 does not reflect this singular verb (“is”) and thus does not reflect the singular sense of data (applicant’s fig. 1: “Encoded Data 165”), and thus does not reflect the improvement as one of skill in the art would recognize it in applicant’s disclosure:
1. (usually used with a singular verb: THAT IS) information in digital format, as encoded text or numbers, or multimedia images, audio, or video.
2. (used with a singular verb: IS) a body of facts; information.
Additional data is available from the president of the firm. (Dictionary.com)
18 “data” in the plural verb (“generate”) context: (used with a plural verb) individual facts, statistics, or items of information. These data represent the results of our analyses. (Dictionary.com)
19 singular verb
20 This last limitation maps (again, one-way ticket and no returning back) to applicant’s disclosure [0080]’s last sentence’s (reproduced below) use of a singular verb “reduces”: “generating the encoded data 165B…reduces the information”
21 include: to contain as a subordinate element; involve as a factor: Schooling should include friendship, fun, and laughter, in addition to rigorous study. (Dictionary.com)
22 [ing] is not claimed in claim 1, and “-ing” (of the first “obtaining”) also changes the scope (plural or singular sense) of the claimed “data”
23 [ing] is not claimed in claim 1, and “-ing” (of “generating”) changes the scope (plural or singular sense) of the claimed “data”
24 [ing] is not claimed in claim 1, and this last particular “-inG” (of the last “obtaininG”) does not change the scope (plural or singular sense) of the claimed “data”
25 vector: Mathematics. a quantity possessing both magnitude and direction, represented by an arrow the direction of which indicates the direction of the quantity and the length of which is proportional to the magnitude, wherein quantity is defined: Mathematics. A) the property of magnitude involving comparability with other magnitudes. B) something having magnitude, or size, extent, amount, or the like, wherein amount is defined: the full effect, value, or significance. (Dictionary.com)
26 residual: a residual quantity; remainder, wherein quantity is defined: a particular or indefinite amount of anything, wherein amount is defined: the full effect, value, or significance. (Dictionary.com)
27 current: new; present; most recent, wherein new is defined: of a kind now existing or appearing for the first time; novel.
28 on: in connection, association, or cooperation with (as shown in figures 22 & 5); as a part or element of. (Dictionary.com)
29 I see the phrase(s) “based on” as broad.
30 of: (used to indicate possession, connection, or association). (Dictionary.com)
31 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com)
32 I see the phrase(s) “based on” as broad.
33 of: (used to indicate possession, connection, or association). (Dictionary.com)
34 bit: Computers. Also called binary digit. a single, basic unit of digital information that is represented by one of two values, such as 1 or 0, True or False, or Yes or No. (Dictionary.com)
35 CLAIM SCOPE: “used” modifies the claimed “estimated values” or “input values”
36 of: (used to indicate possession, connection, or association). (Dictionary.com)
37 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com)
38 I see the phrase(s) “based on” as broad.
39 of: (used to indicate possession, connection, or association). (Dictionary.com)
40 is: look at a figure
41 i.e., involve as a factor
42 “using” is present participle contributing to the action of “process”
43 NLP (Non-Limiting-Phrase)
44 NLP (Non-Limiting-Phrase)
45 NLP (Non-Limiting-Phrase)
46 THE CLAIMED INVENTION AS A WHOLE: I see nothing unexpected in applicant’s disclosure of the claimed “sensors”: this leans toward being obvious, wherein unexpected is defined: not expected; unforeseen; surprising (to me). (Dictionary.com). In contrast, see the rejection of claim 23.
47 velocity: velocity is a vector quantity, wherein quantity is defined: Mathematics. A) the property of magnitude involving comparability with other magnitudes. B) something having magnitude, or size, extent, amount, or the like. C) magnitude, size, volume, area, or length, wherein amount is defined: the full effect, value, or significance (Dictionary.com).
48 “process” is a verb
49 “using” is a present participle contributing to the action of said verb “process” and further modifying the nouns “one or more processors” / “device”.
50 This comma phrase is not an NLP (Non-Limiting-Phrase), since this comma phrase gives sequential order to the elements of claim 23
51 THE CLAIMED INVENTION AS A WHOLE: regarding the claimed prepositional modifier “particular”:
The problem faced by applicants is multi-faceted, with facets (1) and (2) below:
(1) injury, decay, waste, or loss of (comprised by conserve: Dictionary.com) resources such as memory and bandwidth; &
(2) “latency” from “wait”-ing on “input” 105C before encoded data “165B” can be generated (fig. 1: 105C, 165B: “Input Value(s) 105”, “Encoded Data 165”):
[0048]Computing devices often incorporate functionality to process large amounts of data. Compressing the data prior to storage or transmission can conserve resources such as memory and bandwidth. For example, a computing device can generate an encoded version of an image frame that uses fewer bits than the original image frame. Techniques that reduce the size of the compressed data can further conserve resources. The compressed data can be processed to generate predicted data. For example, the predicted data can correspond to a reconstructed version of the image frame, a predicted future image frame in a sequence of images that includes the image frame, a classification of the image frame, other types of data associated with the image frame, or a combination thereof.
[0140]The conditional input generator 162 of FIG. 1 generates the conditional input 167B corresponding to the input value 105B independently of (e.g., prior to obtaining) subsequent input values, including the input value 105C, of the one or more input values 105. For example, the conditional input generator 162 uses the estimator 170 to generate the estimated value 171B based on the predicted value 169A, one or more additional predicted values corresponding one or more input values prior to the input value 105B in the one or more input values 105, or a combination thereof. The conditional input 167B includes the estimated value 171B, the predicted value 169A, the one or more additional predicted values, or a combination thereof. The feature generator 164 generates the feature data 163B based on the conditional input 167B, and the encoder 166 processes the input value 105B based on the feature data 163B to generate the encoded data 165B, as described with reference to FIG. 4. The encoder portion 160 can thus generate the encoded data 165B independently of (e.g., prior to) obtaining the input value 105C. A technical advantage of generating the encoded data 165B independently of the input value 105C can include reduced latency associated with generating the encoded data 165B without having to wait for access to the input value 105C.
Applicant’s solution includes the claimed “particular motion value” (fig. 20: “Estimated MV 2093A”) to reduce the input-wait-latency problem in:
[0198]In a particular example, the decoder portion 180 uses the estimated motion value 2093A (m^b→c) and the estimated motion value 2093B (m^a→b) corresponding to motion values (e.g., motion vectors) between predicted image units that are prior to the image unit 1407D as the conditional input 187D, and generates the predicted motion value 2095A (m^c→d) and the weight 2065 (∝) that can be used to generate a predicted image unit (x^d) associated with the image unit 1407D (xd), as further described with reference to FIGS. 21-23. A technical advantage of generating the predicted motion value 2095A (m^c→d) independently of the encoded data associated with any image units subsequent to the image unit 1407D can include reduced latency associated with generating the predicted motion value 2095A (m^c→d).
These very specific details (I did not see this coming) in [0198] (in the context of applicant’s amended FIG. 19, filed 12/10/2025) are not in claim 23. Thus, the absence of the reduced-latency solution from claim 23 is an indication of obviousness.
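For illustration only (this sketch is not part of the record or the claims; all names, the extrapolation formula, and the weight are hypothetical, loosely modeled on [0198]), the reduced-latency idea of predicting the next motion value m^(c→d) solely from the prior estimated motion values m^(b→c) and m^(a→b), without waiting for encoded data of any subsequent image unit:

```python
# Hypothetical sketch of the [0198] idea: the next motion vector is predicted
# only from prior estimated motion vectors, so no subsequent image-unit data
# is needed (the asserted reduced-latency property). Names are illustrative.

def predict_next_mv(mv_bc, mv_ab, weight=0.5):
    """Linearly extrapolate the next motion vector from two prior ones."""
    return tuple(bc + weight * (bc - ab) for bc, ab in zip(mv_bc, mv_ab))

# Constant motion: the two prior motion vectors agree, so the prediction
# simply repeats them; no later input is consulted.
mv = predict_next_mv((2.0, 0.0), (2.0, 0.0))  # -> (2.0, 0.0)
```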
52 (italics) represent claim limitations already taught
53 (italics) represent claim limitations already taught