Prosecution Insights
Last updated: April 19, 2026
Application No. 18/699,333

HANDLING BLUR IN MULTI-VIEW IMAGING

Non-Final OA: §103, §112
Filed: Apr 08, 2024
Examiner: LEE, JIMMY S
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Koninklijke Philips N.V.
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
Grant Probability with Interview: 84%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases; 170 granted / 302 resolved; -1.7% vs TC avg)
Interview Lift: +28.1% across resolved cases with interview (strong)
Typical Timeline: 3y 7m average prosecution; 33 applications currently pending
Career History: 335 total applications across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 302 resolved cases

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1 and 9 are objected to because of the following informalities: The fifth limitation of claim 1 and the sixth limitation of claim 9 both recite "at least one pixels of the corresponding image", which is grammatically incorrect and requires correction. Until this is corrected, for the purposes of examination, the claimed "at least one pixels of the corresponding image" will be interpreted as "at least one pixel

Claims 7 and 15 are objected to because of the following informalities: The first limitation of each of these dependent claims recites "two depth confidence maps corresponding to the of the at least two depth maps", which appears to be incompletely recited since there is an apparent grammatical issue relating to how the parts of the limitation correspond to one another. In particular, the offending phrasing is "corresponding to the of the at least two depth maps". Until this is corrected, for the purposes of examination, the claimed "two depth confidence maps corresponding to the of the at least two depth maps" will be interpreted as "two depth confidence maps corresponding to the

Claim 10 is objected to because of the following informalities: The third limitation of the dependent claim recites "target viewpoint to so as generate a synthesized image", which is grammatically incorrect and requires correction. Until this is corrected, for the purposes of examination, the claimed "target viewpoint to so as generate a synthesized image" will be interpreted as "target viewpoint to generate a synthesized image".

Claims 16 and 17 are objected to because of the following informalities: The third limitation of claim 16 and the fourth limitation of claim 17 both relate to comparing/compare "the at the at least two depth maps", which is an apparent grammatical issue that duplicates the "the at" phrasing and requires correction. Until this is corrected, for the purposes of examination, the claimed comparing/compare "the at the at least two depth maps" will be interpreted as the claimed comparing/compare "viewpoint .

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In particular, the claims refer to "the pixel color values" without previously disclosing any form of pixel color values, rendering the limitation indefinite. For the purposes of examination, the first instance of "the pixel color values" will be interpreted as "

Additionally, dependent claims 3 and 11 refer to "a confidence score" while a confidence score has already been disclosed in the claims from which they depend, claims 1 and 9, respectively. This inconsistency renders the second reference to "a confidence score" redundant and makes it unclear whether it refers to the initial confidence score or a new confidence score, rendering this particular confidence score indefinite. For the purposes of examination, "a confidence score" as recited in claims 3 and 11 will be interpreted as "the confidence score".

Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In particular, the claim refers to "the at least one depth comparison viewpoint" without previously disclosing an initial depth comparison viewpoint, rendering the limitation indefinite. For the purposes of examination, "the at least one depth comparison viewpoint" will be interpreted as "

Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In particular, the claim refers to "at each of the at least one image comparison viewpoints" without previously disclosing an initial image comparison viewpoint, rendering the limitation indefinite. For the purposes of examination, "at each of the at least one image comparison viewpoints" will be interpreted as "at one image comparison viewpoint".

Claim 17 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In particular, the claim refers to "at each of the at least one image comparison viewpoints" without previously disclosing an initial image comparison viewpoint, rendering the limitation indefinite. For the purposes of examination, "at each of the at least one image comparison viewpoints" will be interpreted as "at one image comparison viewpoint".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Aswin; Buddy (US 20190122378 A1) in view of Wang; Demin et al. (US 20110069237 A1).

Regarding claim 1, Aswin teaches: A method (¶24 and fig. 3, process executed by hardware to "generate a 3D structure data file from 2D image input" as depicted in fig. 3) comprising: obtaining at least two images of a scene, (¶36-38, "two two-dimensional (2D) images" taken from multiple 2D images at different locations) wherein each of the at least two images is from a different camera; (¶37-38, 35, and fig. 3, 2D images (1001) "output from digital imager systems (camera o 21 and camera p 23)" taken from multiple 2D images at different locations such as from digital imager systems 21 and 23 depicted in fig. 2) determining a sharpness indication for each of the at least two images, (¶49-53 and fig. 3, Step 211, depicted in fig. 3, determines "relative blur estimates (RBEs) around MPs or PCs" based on "pairs images (1001 or 1003)") wherein the sharpness indication is a sharpness map, (¶49-50 and fig. 3, relative blur estimates (RBEs) created "around each MPs or CPs") wherein the sharpness map (¶49-50 and fig. 3, relative blur estimates (RBEs) created "around each MPs or CPs") comprises a plurality of sharpness values, (¶49-50, "relative blur estimates (RBEs)") determining a confidence score (¶54 and fig. 3, Step 213, depicted in fig. 3, computes "relative blur ratio") for each of the at least two images (¶54, relative blur ratio used in describing "relationships between MPs/PCs and cameras o 21, p 23") based on the sharpness indication; (¶54 and fig. 3, "compute relative blur ratio using sets of RBEs") and determining weights (¶54, "Output (1013): DS with coefficients of multivariate PAs and variables") based on the confidence score (¶54, DS with coefficients of multivariate PAs and variables that describe "relationship between relative blur radius associated with different MPs or PCs and depth z coordinate of MP and PCs" as output after computing "relative blur ratio using sets of RBEs").

Aswin does not explicitly teach: wherein each of the plurality of sharpness values corresponds to at least one pixels of the corresponding image; blending the at least two images so as to synthesize a new virtual image via view-point interpolation based on the weights.

However, Wang teaches additionally: wherein each of the plurality of sharpness values (¶44 and fig. 6, "pixels of the hole 606" depicted in fig. 6) corresponds to at least one pixels of the corresponding image; (¶44 and fig. 6, pixels of the hole 606 assigned "values derived from the values of the pixels 602 around the hole 606" depicted in fig. 6) blending the at least two images (¶38, "forward and backward-interpolated images") so as to synthesize a new virtual image (¶38, forward and backward-interpolated images are then "combined into a single interpolated image") via view-point interpolation based on the weights (¶38, "weighted averaging is used to combine the images" based on weighted "pixel values of the forward-interpolated image" and "pixel values of the backward-interpolated image").

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the vision system of Aswin with the interpolation of Wang, which combines two weighted images into a single interpolated image. This technique reduces the area of holes and increases image quality.

Regarding claim 8, Aswin with Wang teach the limitations of claim 1. Aswin teaches additionally: A computer program (¶32 and fig. 1, "machine instructions") stored on a non-transitory medium, (¶32 and fig. 1, "machine instructions" 13 stored on the data storage unit 11) wherein the computer program when executed on a processor performs the method as claimed in claim 1 (¶32, 24, fig. 1 and 3, "machine instructions" and hardware used to "generate a 3D structure data file from 2D image input").

Regarding claim 9, it is the device claim of method claim 1. Aswin teaches additionally: A device (Title, "apparatuses" for machine vision systems) comprising: a processor circuit (¶32 and fig. 1, "processor 32" depicted in fig. 1) and a memory circuit, (¶32 and fig. 1, "data storage unit 11" depicted in fig. 1) wherein the memory is arranged to store instructions (¶32 and fig. 1, "machine instructions" 13 stored on the data storage unit 11) for the processor circuit (¶32, 24, fig. 1 and 3, "machine instructions" and hardware used to "generate a 3D structure data file from 2D image input"). Refer to the rejection of claim 1 for the additional limitations of claim 9.

Claims 2, 6, 10, 14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Aswin; Buddy (US 20190122378 A1) in view of Wang; Demin et al. (US 20110069237 A1) in view of GEORGE; James et al. (US 20210375044 A1).

Regarding claim 2, Aswin with Wang teach the limitations of claim 1 but do not explicitly teach the additional limitations of claim 2. However, George teaches additionally: obtaining at least one depth map of the scene; (¶89-90 and fig. 1, "refinement systems 220" such as systems 222, 224, 228 receiving "depth image" 150 from depth image streams 158, 160, 162 as depicted in fig. 1) warping the at least two images to a target viewpoint (¶92, 89, and fig. 1, refinement system 220 "reprojects the depth image into the color image based on the calibration information corresponding to the respective camera system" and "segment the rectified depth image by the color image segmentation stream" occurring for each refinement system 220 (222, 224, 228)) based on the at least one depth map; (¶92, 89, and fig. 1, "depth image") and blending the at least two images at the target viewpoint (¶96, "output by the one or more refinement systems 220, may be combined into a geometry video stream 120") so as to generate a synthesized image, (¶96, "geometry video stream 120") wherein each pixel in the at least two images is weighted based on the corresponding confidence score (¶109, "viewing angle from a first camera and compare the viewing angle to the direction from a second camera to acquire a contribution factor for every pixel" considering the scene's virtual perspective where "a weighting system may be applied that weighs the content samples based on the blending factor that optimizes for content angles close to the virtual perspective" to perform the "per pixel-weighted" blending process).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the vision system of Aswin with the interpolation of Wang with the blending of George, which utilizes depth images to combine images. This allows for high accuracy of a blended image.

Regarding claim 6, Aswin with Wang teach the limitations of claim 1 but do not explicitly teach the additional limitations of claim 6. However, George teaches additionally: obtaining at least two depth maps, (¶73 and fig. 1, depth image streams 150 including "depth image streams 158, 160, 162" depicted in fig. 1) wherein each of the at least two depth maps are obtained from different sensors; (¶73, 72 and fig. 1, "depth image streams 158, 160, 162 taken from different perspectives" depicted in fig. 1 through hardware elements for capturing depth information including "Microsoft Kinect" or some form of "depth sensor" from different perspectives) warping each of the at least two depth maps (¶96-97, deferred surface reconstruction engine 130 receiving "scene is captured from two or more perspectives" to combine the received inputs to "generate a surface stream") to at least one depth comparison viewpoints (¶96-97, "a surface stream") such that there are at least two depth maps (¶96-97 and 92, "combine the received inputs" which are "two or more perspectives" output by the one or more refinement systems 220 which output "a depth and color stream" for each perspective) at each of the at least one image comparison viewpoints; (¶96-97, combine the received inputs "output by the one or more refinement systems 220" of multiple depth and color perspective streams to "generate a surface stream") comparing the at the at least two depth maps (¶96-97 and 109, deferred surface reconstruction engine 130 uses "view-dependent texture blending process" by using "viewing angle from a first camera and compare the viewing angle to the direction from a second camera" with multiple depth perspective streams) at each of the at least one depth comparison viewpoints; (¶96-97 and 109, "use the viewing angle from a first camera and compare the viewing angle to the direction from a second camera") and determining a confidence score (¶109, "acquire a contribution factor for every pixel") for each of the at least two depth maps (¶109 and 96-97, using the "viewing angle from a first camera" and viewing angle "from a second camera") based on the comparison of the depth maps (¶109, use the viewing angle from a first camera and compare the viewing angle to the direction from a second camera to acquire a contribution factor from the input "multiple depth" perspective streams).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the vision system of Aswin with the interpolation of Wang with the blending of George, which utilizes depth images to combine images. This allows for high accuracy of a blended image.

Regarding claim 10, dependent on claim 9, it is the device claim of method claim 2, dependent on claim 1. Refer to the rejection of claim 2 for the limitations of claim 10.

Regarding claim 14, dependent on claim 9, it is the device claim of method claim 6, dependent on claim 1. Refer to the rejection of claim 6 for the limitations of claim 14.

Regarding claim 16, Aswin with Wang teach the limitations of claim 1 but do not explicitly teach the additional limitations of claim 16. However, George teaches additionally: obtaining at least two depth maps, (¶72-73 and fig. 1, "capturing depth information" as depth image streams 150 including "depth image streams 158, 160, 162" from separate inputs 102 as depicted in fig. 1) wherein each of the at least two depth maps are generated from different images of the scene; (¶72-73 and fig. 1, capturing depth information including "depth image streams 158, 160, 162 taken from different perspectives" from separate inputs 102 as depicted in fig. 1) warping each of the at least two depth maps (¶96-97, deferred surface reconstruction engine 130 receiving "scene is captured from two or more perspectives" to combine the received inputs to "generate a surface stream") to at least one depth comparison viewpoints (¶96-97, "a surface stream") such that there are at least two depth maps (¶96-97 and 92, "combine the received inputs" which are "two or more perspectives" output by the one or more refinement systems 220 which output "a depth and color stream" for each perspective) at each of the at least one image comparison viewpoints; (¶96-97, combine the received inputs "output by the one or more refinement systems 220" of multiple depth and color perspective streams to "generate a surface stream") comparing the at the at least two depth maps (¶96-97 and 109, deferred surface reconstruction engine 130 uses "view-dependent texture blending process" by using "viewing angle from a first camera and compare the viewing angle to the direction from a second camera" with multiple depth perspective streams) at each of the at least one depth comparison viewpoints; (¶96-97 and 109, "use the viewing angle from a first camera and compare the viewing angle to the direction from a second camera") and determining a confidence score (¶109, "acquire a contribution factor for every pixel") for each of the at least two depth maps (¶109 and 96-97, using the "viewing angle from a first camera" and viewing angle "from a second camera") based on the comparison of the depth maps (¶109, use the viewing angle from a first camera and compare the viewing angle to the direction from a second camera to acquire a contribution factor from the input "multiple depth" perspective streams).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the vision system of Aswin with the interpolation of Wang with the blending of George, which utilizes depth images to combine images. This allows for high accuracy of a blended image.

Regarding claim 17, dependent on claim 9, it is the device claim of method claim 16, dependent on claim 1. Refer to the rejection of claim 16 for the limitations of claim 17.

Claims 3-5, 7, and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Aswin; Buddy (US 20190122378 A1) in view of Wang; Demin et al. (US 20110069237 A1) in view of GEORGE; James et al. (US 20210375044 A1) in view of Taya; Kaori (US 20190342537 A1).

Regarding claim 3, Aswin with Wang teach the limitations of claim 1 but do not explicitly teach the additional limitations of claim 3. However, George teaches additionally: obtaining at least one depth map of the scene; (¶89-90 and fig. 1, "refinement systems 220" such as systems 222, 224, 228 receiving "depth image" 150 from depth image streams 158, 160, 162 as depicted in fig. 1) warping at least one image to at least one image comparison viewpoints (¶92, 89, and fig. 1, refinement system 220 "reprojects the depth image into the color image based on the calibration information corresponding to the respective camera system" and "segment the rectified depth image by the color image segmentation stream" that outputs "depth stream" and "color stream" for each refinement system 220 (222, 224, 228)) using the at least one depth map (¶92, 89, and fig. 1, "depth image") such that there are at least two warped images (¶92 and fig. 1, "output a depth and color stream" depicted in fig. 1) at each of the at least one image comparison viewpoints; (¶92, output a depth and color stream for "each perspective") comparing the pixel values (¶109 and 103, texture blending including using "viewing angle from a first camera and compare the viewing angle to the direction from a second camera to acquire a contribution factor for every pixel" such that the blended texture pack contains the weighted "color" content contributed to each pixel) of the at least two images at each of the at least one comparison viewpoints (¶99, 109, and fig. 1, "reconstruction engine 130" depicted in fig. 1 implementing view-dependent "texture blending" between various input streams including refined "multiple-perspective streams").

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the vision system of Aswin with the interpolation of Wang with the blending of George, which utilizes depth images to combine images. This allows for high accuracy of a blended image.

George does not explicitly teach: comparing the pixel color values of the at least two warped images, wherein determining a confidence score for each image of the at least two warped images is based on the comparison of the pixel color values. However, Taya teaches additionally: comparing the pixel color values (¶60, foreground-background separation unit 605 determines the absolute value of a difference therebetween "color" in "mutually corresponding pixels" of the images) of the at least two warped images (¶60, "mutually corresponding pixels of the image (long Tv image) 802 and the image (background image) 804") at each of the at least one comparison viewpoints (¶18 and fig. 1, "images of a subject 105 from viewpoints in a plurality of directions" of a region from cameras 101 depicted in fig. 1) wherein determining a confidence score for each image of the at least two warped images (¶60, determination separates "region to which white (1) is allocated serves as a foreground region, and the region to which black (0) is allocated serves as a background region") is based on the comparison of the pixel color values (¶60, separation based on absolute difference "in mutually corresponding pixels of the image (long Tv image) 802 and the image (background image) 804, the absolute value of a difference").

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the vision system of Aswin with the interpolation of Wang with the blending of George with the comparison of Taya, which compares corresponding pixels between pictures. This allows for appropriate generation even in virtual viewpoint images.

Regarding claim 4, Aswin with Wang with George with Taya teach the limitations of claim 3. George teaches additionally: wherein the at least one image comparison viewpoints (¶92, 89, and fig. 1, refinement system 220 "reprojects the depth image into the color image based on the calibration information corresponding to the respective camera system" and "segment the rectified depth image by the color image segmentation stream" that outputs "depth stream" and "color stream" for each refinement system 220 (222, 224, 228)) comprise all of the viewpoints of the at least two warped images (¶92, 89 and fig. 1, refinement system 220 outputs for each perspective which input "various depth image, color image and segmentation streams 150, 152, 154" as depicted in fig. 1).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the vision system of Aswin with the interpolation of Wang with the blending of George with the comparison of Taya, which utilizes depth images to combine images. This allows for high accuracy of a blended image.

Regarding claim 5, Aswin with Wang with George with Taya teach the limitations of claim 3. George teaches additionally: blending the at least two warped images at a target viewpoint (¶96, "output by the one or more refinement systems 220, may be combined into a geometry video stream 120") so as to generate a synthesized image, (¶96, "geometry video stream 120") wherein the at least one image comparison viewpoints is the target viewpoint, (¶143, "corresponding 2D dimensional rendering 604 of the person captured by one of the input cameras" used to "view the 3D rendering") wherein each pixel in the at least two warped images is weighted based on the corresponding confidence score (¶109, "viewing angle from a first camera and compare the viewing angle to the direction from a second camera to acquire a contribution factor for every pixel" considering the scene's virtual perspective where "a weighting system may be applied that weighs the content samples based on the blending factor that optimizes for content angles close to the virtual perspective" to perform the "per pixel-weighted" blending process).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the vision system of Aswin with the interpolation of Wang with the blending of George with the comparison of Taya, which utilizes depth images to combine images. This allows for high accuracy of a blended image.

Regarding claim 7, Aswin with Wang with George teach the limitations of claim 6 but do not explicitly teach the additional limitations of claim 7. However, Taya teaches additionally: obtaining at least two depth confidence maps (¶60-61 and fig. 7, "separates the long Tv image data into a long Tv foreground region and a long Tv background region" and "separates the short Tv image data into a short Tv foreground region and a short Tv background region") corresponding to the of the at least two depth maps; (¶60-61 and 37, "long Tv image data" and "shot Tv image data" used to identify the position of the subject from a captured image) and warping each of the at least two depth confidence maps (¶62-63 and fig. 7, estimate the "shape of a foreground region obtained" in the "long Tv foreground regions" and "short Tv foreground regions") to the at least one depth comparison viewpoint (¶62-63, estimate the shape of a foreground region obtained "based on an overlapping region of multi-viewpoint long Tv foreground regions" and "based on an overlapping region of multi-viewpoint short Tv foreground regions") with the corresponding one of the at least two depth maps, (¶60-61, 37, and fig. 7, long Tv foreground region of the "long Tv image data" separated at step S703 and short Tv foreground region of the "short Tv image data" separated at step S704 as disclosed in fig. 7) wherein comparing the at least two depth maps (¶66 and 60-63, "absolute value of a difference between pixel values" corresponding to pixels of long Tv virtual viewpoint image corresponding to "long Tv foreground region" and pixels of short Tv virtual viewpoint image corresponding to "short Tv foreground region") at each depth comparison viewpoint (¶66 and 60-63, "long Tv image date" and "short Tv image data") further comprises comparing the corresponding one of the at least two depth confidence maps (¶66, 60-63, and fig. 7, "motion blur amount calculation unit 611 calculates a motion blur amount based on the magnitude of the absolute value of a difference between pixel values of mutually corresponding pixels of the long Tv virtual viewpoint image and the short Tv virtual viewpoint image" that correspond to long Tv foreground regions and short Tv foreground regions).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the vision system of Aswin with the interpolation of Wang with the blending of George with the comparison of Taya, which calculates blur based on the difference between pixels of separate viewpoint images. This allows for high accuracy of a blended image.

Regarding claim 11, dependent on claim 9, it is the device claim of method claim 3, dependent on claim 1. Refer to the rejection of claim 3 for the limitations of claim 11.

Regarding claim 12, dependent on claim 11, it is the device claim of method claim 4, dependent on claim 3. Refer to the rejection of claim 4 for the limitations of claim 12.

Regarding claim 13, dependent on claim 11, it is the device claim of method claim 5, dependent on claim 3. Refer to the rejection of claim 5 for the limitations of claim 13.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Aswin; Buddy (US 20190122378 A1) in view of Wang; Demin et al. (US 20110069237 A1) in view of Taya; Kaori (US 20190342537 A1).

Regarding claim 15, dependent on claim 9, it is the device claim of method claim 7, dependent on claim 6. Refer to the rejection of claim 7 for the limitations of claim 15.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIMMY S LEE, whose telephone number is (571) 270-7322. The examiner can normally be reached Monday through Friday, 10 AM-8 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph G. Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH G USTARIS/ Supervisory Patent Examiner, Art Unit 2483
/JIMMY S LEE/ Examiner, Art Unit 2483
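For readers mapping the §103 analysis back to the underlying technique, the sketch below illustrates the kind of pipeline claim 1 describes as characterized in the rejection: a per-pixel sharpness map for each source view, a per-image confidence score derived from it, blending weights derived from the confidences, and a weighted blend that synthesizes a new image. This is not the applicant's disclosed implementation or the cited references' algorithms; it is a minimal illustration assuming grayscale views already registered to a common viewpoint, a Laplacian-based sharpness proxy, and simple normalized weights. All function names are hypothetical.

```python
import numpy as np

def sharpness_map(gray: np.ndarray) -> np.ndarray:
    """Per-pixel sharpness proxy: squared response of a 4-neighbour Laplacian.
    The claim only requires sharpness values tied to at least one pixel each;
    the specific measure here is an illustrative assumption."""
    lap = (np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1)
           - 4.0 * gray)
    return lap ** 2

def confidence_score(sharp: np.ndarray) -> float:
    """Per-image confidence from the sharpness indication (here: mean sharpness)."""
    return float(sharp.mean())

def blend_views(views: list[np.ndarray], confidences: list[float]) -> np.ndarray:
    """Blend registered views with weights derived from the confidence scores."""
    w = np.asarray(confidences, dtype=np.float64)
    w = w / (w.sum() + 1e-12)                      # weights from confidences
    stack = np.stack([v.astype(np.float64) for v in views], axis=0)
    return np.tensordot(w, stack, axes=1)          # weighted per-pixel average

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp_view = rng.random((120, 160))            # stand-in for an in-focus view
    blurry_view = 0.5 * np.ones((120, 160))        # stand-in for a defocused view
    conf = [confidence_score(sharpness_map(v)) for v in (sharp_view, blurry_view)]
    synthesized = blend_views([sharp_view, blurry_view], conf)
    print("confidences:", conf, "output shape:", synthesized.shape)
```

In the claim as read by the examiner, the weights drive a view-point interpolation step (the forward/backward-interpolated image combination attributed to Wang); here the views are simply assumed to be pre-registered so the blend stays self-contained.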
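The rejections of claims 2, 3, and 6 turn on depth-based warping, a per-pixel confidence derived from comparing warped maps, and a per-pixel confidence-weighted blend at a target viewpoint. The sketch below is again only an illustration under simplifying assumptions: rectified views so the warp reduces to a horizontal shift with disparity = focal * baseline / depth, confidence taken from how well two maps agree once registered to a common comparison viewpoint (depth maps for the claim-6/16 reading, pixel color values for the claim-3 reading), and a claim-2-style weighted blend. Function names, parameters, and the exponential confidence model are assumptions, not taken from the application or the cited art.

```python
import numpy as np

def warp_to_viewpoint(img, depth, baseline, focal):
    """Forward-warp a rectified view by per-pixel disparity = focal*baseline/depth.
    A deliberately simplified 1-D parallax model, not a general reprojection."""
    h, w = depth.shape
    out = np.zeros_like(img, dtype=np.float64)
    valid = np.zeros((h, w), dtype=bool)
    disp = np.where(depth > 0.0, focal * baseline / np.maximum(depth, 1e-6), 0.0)
    xs = np.arange(w)[None, :] + np.rint(disp).astype(int)
    for y in range(h):
        ok = (xs[y] >= 0) & (xs[y] < w)
        out[y, xs[y, ok]] = img[y, ok]     # last write wins on collisions
        valid[y, xs[y, ok]] = True
    return out, valid

def consistency_confidence(map_a, map_b, sigma=0.05):
    """Per-pixel confidence from agreement of two maps registered to the same
    comparison viewpoint; larger disagreement gives lower confidence."""
    diff = np.abs(map_a.astype(np.float64) - map_b.astype(np.float64))
    return np.exp(-diff / sigma)

def blend_at_target(warped, valids, confidence_maps):
    """Per-pixel confidence-weighted blend of warped views (claim-2 style)."""
    num = np.zeros_like(warped[0], dtype=np.float64)
    den = np.zeros_like(warped[0], dtype=np.float64)
    for img, ok, conf in zip(warped, valids, confidence_maps):
        w = conf * ok
        num += w * img
        den += w
    return num / np.maximum(den, 1e-12)
```

A claim-7-style depth confidence map could be warped alongside each depth map with the same warp and multiplied into the consistency confidence, so pixels the depth sensor itself flags as unreliable are down-weighted both in the comparison and in the final blend.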

Prosecution Timeline

Apr 08, 2024
Application Filed
Oct 31, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604034
METHOD FOR PARTITIONING BLOCK AND DECODING DEVICE
2y 5m to grant • Granted Apr 14, 2026
Patent 12596190
MILLIMETER WAVE DISPLAY ARRANGEMENT
2y 5m to grant • Granted Apr 07, 2026
Patent 12581086
MERGE WITH MVD BASED ON GEOMETRY PARTITION
2y 5m to grant • Granted Mar 17, 2026
Patent 12563112
SPATIALLY UNEQUAL STREAMING
2y 5m to grant • Granted Feb 24, 2026
Patent 12554017
EBS/TOF/RGB CAMERA FOR SMART SURVEILLANCE AND INTRUDER DETECTION
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56%
With Interview: 84% (+28.1%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 302 resolved cases by this examiner. Grant probability derived from career allow rate.
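The "With Interview" figure appears to be the career allow rate plus the interview lift expressed in percentage points (56% + 28.1 pp ≈ 84%). A one-line check under that assumed additive model:

```python
# Assumed additive model: with-interview probability = career allow rate + interview lift (pp).
career_allow_rate = 0.56
interview_lift_pp = 0.281
print(f"{career_allow_rate + interview_lift_pp:.0%}")  # -> 84%
```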

Free tier: 3 strategy analyses per month