Prosecution Insights
Last updated: April 19, 2026
Application No. 18/361,982

ELIMINATING NON-IMPORTANT MOTION IN IMAGE SEQUENCES

Non-Final OA: §103, §112
Filed: Jul 31, 2023
Examiner: CADEAU, WEDNEL
Art Unit: 2632
Tech Center: 2600 — Communications
Assignee: Varjo Technologies OY
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 72%, above average (381 granted / 532 resolved; +9.6% vs TC avg)
Interview Lift: +19.6% (strong), among resolved cases with interview
Typical Timeline: 2y 9m avg prosecution; 42 currently pending
Career History: 574 total applications across all art units

Statute-Specific Performance

§101: 2.5% (-37.5% vs TC avg)
§103: 75.6% (+35.6% vs TC avg)
§102: 3.5% (-36.5% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 532 resolved cases
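As a sanity check on the figures above: every "vs TC avg" delta is consistent with a single Tech Center average estimate of 40% per statute (an inference from the numbers shown, not a value stated on the page):

```python
# Reconstruct the "vs TC avg" deltas from the examiner's statute-specific
# rejection rates. The 40.0% Tech Center average is an assumption inferred
# from the chart: each displayed delta equals the rate minus 40.0 points.
examiner_rates = {"§101": 2.5, "§103": 75.6, "§102": 3.5, "§112": 16.5}
tc_avg = 40.0
deltas = {statute: round(rate - tc_avg, 1) for statute, rate in examiner_rates.items()}
print(deltas)  # {'§101': -37.5, '§103': 35.6, '§102': -36.5, '§112': -23.5}
```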

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Prior art cited in this office action:
Fitzgerald et al. (US 20200280739 A1, hereinafter "Fitzgerald")
Strandborg et al. (US 20210248941 A1, hereinafter "Strandborg")
Lakshminarayanan (US 20240212192 A1, hereinafter "Lakshminarayanan")

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5 and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 5 recites the limitation "encoding the original values of the pixels" in claim 1. There is insufficient antecedent basis for this limitation in the claim. Claim 12 recites a similar limitation and is rejected on the same ground as claim 5 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Fitzgerald et al. (US 20200280739 A1, hereinafter "Fitzgerald") in view of Strandborg et al. (US 20210248941 A1, hereinafter "Strandborg").
Regarding claims 1, 8 and 15: Fitzgerald teaches a computer-implemented method (Fitzgerald [0001], [0073], [0094], where Fitzgerald discloses a computer-implemented method and apparatus) comprising:

identifying a gaze location in a given image, based on a given gaze direction (Fitzgerald [0028], where Fitzgerald teaches that in one embodiment the method includes: determining a user gaze including at least one of: a direction of gaze; and, a depth of gaze; using the user gaze to at least one of: compress image data; and, decompress the compressed image data);

dividing the given image into a plurality of areas (Fitzgerald [0132], [0163], [0170], [0212], where Fitzgerald discloses that the image is divided into a plurality of regions or areas of interest having arbitrary shape);

for a given area of the given image, identifying a corresponding area in at least one previous image (Fitzgerald [0026], [0104], [0130], claim 15, where Fitzgerald teaches that in one embodiment the method includes: determining a change in display device pose from display of a previous image using at least one of: movement data; and, pose data and previous pose data; and, using the change in display device pose to at least one of: compress image data; and, decompress the compressed image data);

determining an extent of change between the corresponding area of the at least one previous image and the given area of the given image (Fitzgerald [0026], [0104], [0130]-[0132], claim 15, where Fitzgerald teaches that the appearance of individual objects within an image may be unchanged between successive images, with only the position varying based on movement of the display device. Accordingly, in this example, it is possible to simply replace portions of an image with part of a previous image.
The display device can then retrieve image data from the previous image and substitute this into the current image, vastly reducing the amount of image data that needs to be transferred without resulting in any loss in information); and

encoding the given image into encoded image data, wherein when the importance factor for the given area is smaller than a first predefined threshold, the step of encoding comprises re-using previous encoded data of the corresponding area, instead of encoding the given area of the given image into the encoded image data (Fitzgerald [0026], [0104], [0130]-[0132], claim 15, where Fitzgerald teaches that the display device can then retrieve image data from the previous image and substitute this into the current image, vastly reducing the amount of image data that needs to be transferred without resulting in any loss in information. This could be calculated based on the display device movement, and/or could be achieved through code substitution, for example by replacing part of an image with a reference to part of a previous image, with the reference being transmitted as part of the image data. The reference could be of any appropriate form, but in one example is a code or similar that refers to a region within the earlier image. This could include a specific region of pixels, such as one or more pixel arrays, or could refer to a region defined by a boundary, as will become apparent from the remaining description).

Fitzgerald fails to explicitly teach calculating an importance factor for the given area of the given image, based on the determined extent of change and a distance of the given area from the gaze location. However, Fitzgerald teaches that this provides a mechanism for compressing and subsequently decompressing the image, with the compression being controlled based on the location of an array of pixels relative to a defined point.
Specifically, this allows a degree of compression to be selected based on the position of the array of pixels, so that less compression can be used in more important parts of an image, such as in a region proximate the point of gaze, whilst greater compression is used in other areas of the image, such as further from the point of gaze, for example in the user's peripheral field of view (Fitzgerald [0209], [0212], [0286]).

Furthermore, Strandborg teaches that it will be appreciated that the color reproduction capabilities are significantly improved when generating the output image frames based on the gaze direction of the user's eye. This is attributed to the fact that colour reproduction for different output regions of the output image frames, such as the first output region and the second output region, is performed differently in an optimized manner, using the distance factor (Strandborg [0078]-[0089], [0138], [0148]-[0151]).

Therefore, taking the teachings of Fitzgerald and Strandborg as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to determine an importance factor for the given area of the given image, based on the determined extent of change and a distance of the given area from the gaze location, in order to determine the type of encoding to apply such that the best image possible can be obtained that provides the necessary detail while minimizing the amount of data that needs to be processed by the system.

Claims 2-14 are rejected under 35 U.S.C. 103 as being unpatentable over Fitzgerald et al. (US 20200280739 A1, hereinafter "Fitzgerald") in view of Strandborg et al. (US 20210248941 A1, hereinafter "Strandborg") and further in view of Lakshminarayanan (US 20240212192 A1, hereinafter "Lakshminarayanan").
Regarding claims 2 and 9: Fitzgerald in view of Strandborg fails to teach further comprising reprojecting the at least one previous image from a corresponding previous pose to a given pose, prior to identifying the corresponding area in the at least one previous image, the at least one previous image and the given image being rendered according to the corresponding previous pose and the given pose, respectively. However, Lakshminarayanan teaches that a "shift per requirement" (e.g., in terms of a relative position within an image frame) may be applied to first and second dynamic objects D1 and D2 during respective object frame periods. As described with reference to FIG. 3, for example, control circuit 110 may include a sub-image shifter 370 configured to determine whether and the extent to which a positional shift within an image frame should be applied to a first dynamic object D1 or to a second dynamic object D2. The shift determination may be based, at least in part, on respective motion vectors determined for dynamic objects D1 and D2, as explained with reference to FIG. 1. The motion vector may be based on, for example, a movement vector of object D1 or D2, any directional change of the gaze of the viewer's pupil(s), or any movement vector of the viewer's head or entire body. Movement of the viewer's head or body may be determined, for example, based on detected movement of control circuit 110. In some systems, a respective movement vector of a projected object may be encoded as part of the video data for object D1 or D2, or may be derived by control circuit 110 by comparing positions of a given image in one image frame relative to another (Lakshminarayanan [0023], [0127], [0137]).
Therefore, taking the teachings of Fitzgerald, Strandborg and Lakshminarayanan as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to perform reprojection of the two images or image frames to determine the amount of difference between the images, such as the difference between corresponding regions, since using reprojection in this manner is a well-known technique that provides a good indication of, and highlights, the differences between images.

Regarding claims 3 and 10: Fitzgerald in view of Strandborg and further in view of Lakshminarayanan teaches wherein the at least one previous image comprises a plurality of previous images, the method further comprising tracking changes in the given area across a sequence of images, said sequence comprising the plurality of previous images and the given image, wherein the importance factor is calculated for the given area of the given image, further based on at least one of: an extent of the tracked changes across the sequence of images, a rate with which the changes have occurred across the sequence of images (Fitzgerald [0020], [0043], [0127]; Strandborg [0075]; Lakshminarayanan [0022]-[0023], [0127], [0137], where Fitzgerald in view of Strandborg and Lakshminarayanan teaches that in one embodiment the image forms part of a sequence of images, and wherein the method includes using respective display data to compress and decompress at least one of: image data for a sub-sequence of one or more images; and, image data for each image).
Regarding claims 4 and 11: Fitzgerald in view of Strandborg and further in view of Lakshminarayanan teaches further comprising, when the importance factor for the given area is greater than or equal to the first predefined threshold but smaller than a second predefined threshold, interpolating between original values of pixels of the given area of the given image and values of corresponding pixels of the corresponding area of the at least one previous image, based on the importance factor calculated for the given area, to generate interpolated values for the pixels of the given area, and encoding the interpolated values into the encoded image data (Lakshminarayanan [0022]-[0023], [0090], [0118], [0127], [0137], where an intermediate position, for example, can be interpreted by one of ordinary skill in the art as performing interpolation).

Regarding claims 5 and 12: Fitzgerald in view of Strandborg and further in view of Lakshminarayanan teaches further comprising, when the importance factor for the given area is greater than the second predefined threshold, "encoding the original values of the pixels" of the given area into the encoded image data (Fitzgerald [0026], [0104], [0130]-[0132], claim 15; Strandborg [0012]; Lakshminarayanan [0022]-[0023], [0090], [0118], [0127], [0137]).
Regarding claims 6 and 13: Fitzgerald in view of Strandborg and further in view of Lakshminarayanan teaches further comprising attaching, with the given image, metainformation indicative of at least one of: areas of the given image for which previous encoded data of their corresponding areas of the at least one previous image are to be re-used, positions of the corresponding areas of the at least one previous image, relative positions of the corresponding areas of the at least one previous image with respect to the areas of the given image, respective rotation to be applied to the corresponding areas, respective scaling to be applied to the corresponding areas (Fitzgerald [0034], [0046], [0057], [0146], [0293]; Lakshminarayanan [0022]-[0023], [0090], [0118], [0127], [0137]).

Regarding claims 7 and 14: Fitzgerald in view of Strandborg and further in view of Lakshminarayanan teaches wherein for each area of the given image whose importance factor is smaller than the first predefined threshold, the encoded image data comprises a reference to previous encoded data of a corresponding area of the at least one previous image that is to be re-used for said area of the given image (Fitzgerald [0026], [0104], [0130]-[0132], claim 15; Strandborg [0012]; Lakshminarayanan [0022]-[0023], [0090], [0118], [0127], [0137]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEDNEL CADEAU, whose telephone number is (571) 270-7843. The examiner can normally be reached Mon-Fri, 9:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chieh Fan, can be reached at 571-272-3042.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WEDNEL CADEAU/
Primary Examiner, Art Unit 2632
January 23, 2026
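The three-tier encoding scheme the office action walks through (claims 1, 4/11, and 5/12: re-use previous encoded data below a first threshold, interpolate between thresholds, encode original pixel values above the second) can be sketched as follows. The function name, the importance formula, and the threshold values are illustrative assumptions, not the applicant's or the references' actual implementation:

```python
# Hypothetical sketch of the claimed three-tier, gaze-aware encoding decision.
# Thresholds t1/t2 and the importance formula are illustrative only.

def encode_area(current, previous, change_extent, gaze_dist, t1=0.25, t2=0.75):
    """Decide how to encode one image area.

    The importance factor grows with the extent of change between the
    current area and its corresponding area in the previous image, and
    shrinks with the area's distance from the gaze location.
    """
    importance = change_extent / (1.0 + gaze_dist)

    if importance < t1:
        # Claim 1: re-use previous encoded data of the corresponding area.
        return ("reuse_reference", None)
    elif importance < t2:
        # Claims 4/11: interpolate between current and previous pixel
        # values, weighted by the importance factor.
        w = (importance - t1) / (t2 - t1)
        blended = [w * c + (1 - w) * p for c, p in zip(current, previous)]
        return ("encode_interpolated", blended)
    else:
        # Claims 5/12: encode the original pixel values of the area.
        return ("encode_original", list(current))
```

For example, a barely-changed peripheral area (small `change_extent`, large `gaze_dist`) falls into the re-use branch, which is what lets the encoder emit only a reference to previously encoded data instead of fresh pixel data.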

Prosecution Timeline

Jul 31, 2023
Application Filed
Jan 23, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586241
POSITION DETERMINATION METHOD, DEVICE, AND SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573052
METHOD AND APPARATUS FOR IMAGE SEGMENTATION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573022
ANOMALY DETECTION FOR COMPONENT THROUGH MACHINE-LEARNING BASED IMAGE PROCESSING AND CONSIDERING UPPER AND LOWER BOUND VALUES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573076
POSITION MEASUREMENT SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567178
THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 91% (+19.6%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 532 resolved cases by this examiner. Grant probability derived from career allow rate.
