Prosecution Insights
Last updated: April 19, 2026
Application No. 18/563,185

UNCONSTRAINED IMAGE STABILISATION

Final Rejection — §102, §103
Filed: Nov 21, 2023
Examiner: YILMAKASSAYE, SURAFEL
Art Unit: 2639
Tech Center: 2600 — Communications
Assignee: Opteran Technologies Limited
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 50% of resolved cases granted (17 granted / 34 resolved; -12.0% vs TC avg)
Interview Lift: +33.6% for resolved cases with interview (strong +34% lift)
Avg Prosecution: 2y 6m typical timeline; 31 applications currently pending
Total Applications: 65 across all art units (career history)

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 58.7% (+18.7% vs TC avg)
§102: 34.3% (-5.7% vs TC avg)
§112: 4.5% (-35.5% vs TC avg)
Tech Center average shown for comparison. Based on career data from 34 resolved cases.

Office Action

§102 §103
Notice of Pre-AIA or AIA Status 1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Information Disclosure Statement 2. The information disclosure statement (IDS) submitted on 08/06/2025 and 05/08/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. Claim Rejections - 35 USC § 102 3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. 4. Claims 1-2, 6-9, 15-16, 19, 44-50, and 52-53 are rejected under U.S.C. 102 (a)(1) as being anticipated by Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa). 5. Regarding claim 1, a computer-implemented method for stabilizing motion data (…wherein Saa, in [0014] teaches obtaining sensor data generated as a result of sensing shaking, [0077-79] further teach a computer program to perform image correction caused by shaking…), the method comprising: receiving data associated with the objects from one or more sources (…[0005] teaches obtaining a plurality of motion vectors regarding a 360-degree image; further, [0063] gives examples of motion vectors, to include motion vectors related to an object that moves in a scene of an image…); establishing, from said data, a rotationally stable field of view using one or more techniques (…[0005] teaches obtaining three-dimensional rotation information of the 360-degree image by three-dimensionally translating the determined at least one motion vector; and correcting distortion of the 360-degree image, which is caused by shaking, based on the obtained 3D rotation information. [0043] further teaches the correcting of distortion of a 360-degree image by combining sensor data with 3D rotation information…); encoding the stable field of view based on one or more data structures (…[0077] teaches that an image processing apparatus may render and display a corrected 360-degree image or may encode and store the image…), wherein said one or more data structures comprise at least one, two or more dimensional projection (…wherein [0043] teaches the correcting of distortion of a 360-degree image by combining sensor data with 3D rotation information, [0077] teaches the encoding and display of that image; [0088] further teaches the 360-degree image may be stored as 2D image data through equi-rectangular projection, and a predefined translation may be used to translate the motion vector into the 3D rotation. 
The pre-defined translation may be defined in advance based on the geometry of 2D projection…); and isolating translational motion from rotational motion from said encoding to stabilise motion data, when the field of view is stabilized wherein the isolated translational motion is used to extract stable motion data (…with regards to the extraction of motion, step 107 of the instant application specifies (in [0097]) extracting motion data from the encoded stable field of view for detecting motion in said data. The extraction may be accomplished via an optic flow or like algorithms that estimate the motion of an object; further [0124] teaches isolating translational motion from rotational motion from said encoding to extract motion data. Saa, in [0057], teaches motion vectors as information used to explain a displacement of certain area of an image between a reference frame and a current frame; wherein [0063] gives examples of motion vectors, not relative to rotation, to include motion vectors related to an object that moves in a scene; as such this may be viewed as a similar specification of the instant application wherein object motion is estimated for extracting scene motion. [0059] teaches the obtaining of motion vectors which may be generated and stored during image encoding. Further, however, [0060] teaches that these motion vectors may be retrieved from a stored 360-degree image file, thus decreasing processing load. Hence, it may be said that motion vectors from encoded data are retrieved for image processing…). 6. Regarding claim 2, Saa teaches the method of claim 1 (see claim 1 above), wherein said one or more techniques comprise: algorithms configured to create a rotationally stabilised omnidirectional field of view from the said data based on the orientation of said one or more sources (…wherein the flowcharts of figures 2 and 3 may be viewed as algorithms which are used to obtain a 360 degree image; wherein [0182] teaches that such software elements of programming may be implemented with any language (e.g., C++, assembler language) with various algorithms being implemented with any combination of data structure, objects, processes, routines or other programming elements…). 7. Regarding claim 6, Saa teaches the method of claim 1 (see claim 1 above),further comprising: extracting said stable motion data from the encoded stable field of view (…Saa, in [0059], teaches the obtaining of motion vectors which may be generated and stored during image encoding. Further, [0060] teaches that these motion vectors may be retrieved from a stored 360-degree image file…). 8. Regarding claim 7, Saa teaches the method of claim 1 (see claim 1 above),wherein said one or more data structures comprise a spherical projection or a cylindrical projection (…wherein [0051] teaches according to unit sphere representation, pixels forming frames of the 360-degree image may be indexed to a three-dimensional (3D) coordinate system defining locations of respective pixels on a surface of a virtual sphere…). 9. Regarding claim 8, Saa teaches the method of claim 7 (see claim 7 above), wherein the spherical projection of the object moves with said one or more sources (…wherein Saa, in [0062], as part of explaining global rotation teaches that when a 360-degree image is captured in a vehicle that moves, global rotation may occur in the background due to the rotation of the vehicle and may occur in every part of the vehicle shown in the background and the foreground due to the rotation of the camera itself…). 10. 
Regarding claim 9, Saa teaches the method of claim 7 (see claim 7 above), wherein the stable fields of view is at least partially non-stabilised in relation to a moving object (…while Saa, in [0062], teaches global rotation, [0063] teaches motion vectors related to motion of an object within a scene. [0065] teaches the performance of filtering to remove a motion vector. Therefore, it may be said that a fully stabilized field of view requires both global rotation and motion vectors to be corrected…). 11. Regarding claim 15, Saa teaches the method of claim 1 (see claim 1 above), wherein said one or more data structures comprise an equi-area projection and/or locally Cartesian projection (…wherein Saa, in [0052], teaches 2D equivalent representation such as a cube map projection or an equi-rectangular projection, regarding the format in which a 360-degree image is stored...). 12. Regarding claim 16, Saa teaches the method of claim 15 (see claim 15 above), wherein said motion data is extracted using optic flow type estimation based on properties associated with the equi-area projection and the locally Cartesian projection (…Saa, in [0057], teaches that motion vector may be information used to explain a displacement of a certain area of an image between a reference frame 401 and a current frame 402, wherein motion vectors may be obtained at points that are evenly distributed throughout the frame; as such the distributed points are viewed as optic flow which further relate to the format of the stored 360-degree image (specified in claim 7)…). 13. Regarding claim 19, Saa teaches the method of claim 1 (see claim 1 above), wherein the method is implemented on one or more processors associated with at least one of: a central processing unit (…wherein [0182] teaches that Saa's disclosure may employ CPUs…), a graphics processing unit, a tensor processing unit, a digital signal processor, an application-specific integrated circuit, a fabless semiconductor, a semiconductor intellectual property core, or a combination thereof. 14. Regarding claim 44, Saa teaches the method of claim 1 (see claim 1 above), wherein said one or more sources comprise: at least one camera, sensor, or device suitable for receiving external data directly or indirectly (…wherein Saa, in [0157], teaches a data obtainer which may communicate with an external device to obtain learning data related to a 360 degree image…). 15. Regarding claim 45, Saa teaches the method of claim 1 (see claim 1 above), further comprising: receiving simulated data in relation to one or more simulations, wherein the simulated data are used to establish the stable field of view by means of insertion or superposition of the simulated data to said data (…wherein Saa, in [0075], teaches an embodiment based on artificial intelligence (AI), where a machine learning mechanism is used to imitate actions of living things and obtain sensor rotation/translation by using motion vectors as input data; [0157], further teaches a data obtainer which may communicate with an external device to obtain learning data related to a 360 degree image. Further, [0160] teaches a model learner may learn standards regarding whether to determine the 3D rotation information from the motion vectors, by using some information from the 360-degree image in layers of the learning model network. Wherein the data received for learning can be viewed as an element of simulation data, the use of information from the 360 degree image in layers can be viewed as superimposed information…). 16.
Regarding claim 46, Saa teaches the method of claim 1 (see claim 1 above), wherein the said data comprise: simulated data corresponding to said one or more sources for establishing the stable field of view with said data (…wherein Saa, in [0157], teaches a data obtainer which may communicate with an external device to obtain learning data related to a 360 degree image (simulation data); [0160], teaches a model learner which may learn standards regarding whether to determine the 3D rotation information from the motion vectors, by using some information from the 360-degree image in layers of the learning model network. As limited in claim 1, rotational information is used as part of generating a distortion corrected image…). 17. Regarding claim 47, Saa teaches the method of claim 1 (see claim 1 above), further comprising: applying one or more machine learning (ML) models to classify an object in said data based on said one or more data structures (…wherein Saa, in [0075], teaches a machine learning system may be used that is the same as a learning network model that trains with regard to patterns of motion vectors in a frame having predetermined rotation; thus the patterns of motion vectors can be viewed objectively…), wherein said one or more ML models configured to recognise the object representative of said encoding associated with the stable field of view (…further, [0059] teaches that motion vector may be generally generated and stored during an existing image encoding process…). 18. Regarding claim 48, Saa teaches the method of claim 47 (see claim 47 above), wherein said one or more ML models are trained using data annotated with one or more objects wherein the annotated data is transformed using said one or more data structures for training the ML models (…Saa in [0151] teaches a data learner may learn standards for obtaining 3D rotation information from motion vectors regarding a 360-degree image and a data recognizer may determine the 3D rotation information from the motion vectors regarding the 360-degree image, based on the standards that are learned by the data learner…). 19. Regarding claim 49, Saa teaches the method of claim 1 (see claim 1 above), further comprising: generating a labelled output dataset for training a machine learning model from a dataset of labelled sensor inputs (…Saa, in [0073], teaches the use of a learning network model for obtaining rotational information based on motion vectors. Further, [0075] teaches the conversion of sensory information into a format corresponding to motor system requirements; wherein machine learning mechanisms are used to imitate natural actions and obtain sensor rotation/translation by using motion vectors as input data…), wherein the machine learning model is configured to operate on said data stored and encoded by said one or more data structure (…Saa, in [0076], teaches step S240, wherein an image processing apparatus corrects distortion of a 360-degree image caused by shaking, based on the obtained 360-degree rotation information…). 20.
Regarding claim 50, an apparatus for stabilising motion data (…Saa, in [0004], teaches a method and apparatus for processing a 360-degree image, including the enablement of image stabilization due to translational/rotational motion…), comprising: an interface for receiving data from one or more sources (…wherein [0169] teaches a data obtainer 1710 which may obtain at least one 360-degree image…); one or more integrated circuits (…wherein Saa teaches figures 17 and 18 which may be viewed as integrated circuits…) configured to: establish, from said data, a stable field of view using one or more techniques (…wherein Saa, in [0034], teaches correcting image distortion by means methods described in [0034]…); encode the stable field of view based on one or more data structures, wherein said one or more data structures comprise a two or more dimensional projection (…wherein Saa, in [0051], teaches pixels forming frames of the 360-degree image may be indexed to a three-dimensional (3D) coordinate system…); and isolate translational motion from rotational motion from said encoding to stabilise motion data, when the field of view is stabilised, wherein the isolated translational motion is used to extract stable motion data from the encoded stable field of view to detect object motion in said data (…Saa, in [0057], teaches motion vectors (viewed as translational motion) as information used to explain a displacement of certain area of an image between a reference frame and a current frame; wherein [0063] gives examples of motion vectors, not relative to rotation, to include motion vectors related to an object that moves in a scene; as such this may be viewed as a similar specification of the instant application wherein object motion is estimated for extracting scene motion. [0059] teaches the obtaining of motion vectors which may be generated and stored during image encoding. Further, however, [0060] teaches that these motion vectors may be retrieved from a stored 360-degree image file, thus decreasing processing load. Hence, it may be said that motion vectors from encoded data are retrieved for image processing…). 21. Regarding claim 52, Saa teaches the method of claim 1 (see claim 1 above), wherein the data comprises: a frame comprising a plurality of pixels (…wherein [0051] teaches pixels forming frames, [0043] teaches the correcting of distortion of a 360-degree image by combining sensor data with 3D rotation information …), and the method comprises: for each of at least two pixels of the frame, isolating translational motion from rotational motion from said encoding to stabilise motion data, when the field of view is stabilised, wherein the isolated translational motion is used to extract stable motion data for the pixel (…Saa, in [0057], teaches motion vectors as information used to explain a displacement of certain area of an image between a reference frame and a current frame, wherein motion vectors are obtained throughout a frame for a wide field of view; [0063] gives examples of motion vectors, not relative to rotation, to include motion vectors related to an object that moves in a scene (which may be viewed as translational motion). [0059], teaches the obtaining of motion vectors, of a frame of a 360-degree image, may be generated and stored during image encoding. 
Further, [0060] teaches that these motion vectors may be retrieved from a stored 360-degree image file and be reused, thus, Saa teaches isolating translational motion from rotational motion on a frame (which includes pixels) basis…and for each of the at least two pixels of the frame, mapping the pixel to a corresponding pixel in the one, two or more dimensional projection, based on the extracted stable motion data for the pixel (…wherein [0051] teaches pixels forming frames of the 360-degree image may be indexed to a three-dimensional (3D) coordinate system defining locations of respective pixels on a surface of a virtual sphere…). 22. Regarding claim 53, Saa teaches the method of claim 52 (see claim 52 above), further comprising: mapping two or more pixels of the frame to a same corresponding pixel in the one, two or more dimensional projection, based on the extracted stable motion data for the two or more pixels (…wherein two or more dimensional projection based on extracted stable motion data for two or more pixels is addressed in claim 52; Saa, in [0051], further teaches pixels forming frames of a 360-degree image may be indexed to a 3D coordinate system defining locations of respective pixels…). Claim Rejections - 35 USC § 103 23. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 24. Claim 3 are rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Soman et al. (US 2022/0294987 A1; further referred to as Soman). 25. Regarding claim 3, Saa teaches the method of claim 1 (see claim 1 above). Saa, however, does not further teach the method of claim 1 wherein said one or more techniques further comprise algorithms configured to correct rolling shutter from said data (…however, Soman teaches an image stabilization process which may compensate for rolling shutter distortions during image capture; wherein the process also includes compensating for effects of movements of an imaging camera device due to rotation and translation of the device (as taught in [0047]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that image stabilization can be enhanced through a means of compensating for rolling shutter distortions…). 26. Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Wissenbach et al. 
(US 2018/0332226 A1; further referred to as Wissenbach). 27. Regarding claim 4, Saa teaches the method of claim 1 (see claim 1 above). Saa does not further specify the method of claim 1 wherein said one or more techniques process said data by iteratively adding received data in a continuous manner to establish the stable field of view (…however, Wissenbach teaches an omnidirectional camera of image stabilization and reorientation which utilizes image stabilization iterations in combination with data interpolation and/or derivation ([0010]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that iterations through accumulated data can be performed so to generate image stabilized spherical video files…). 28. Regarding claim 5, Saa in view of Wissenbach teaches the method of claim 4 (see claim 4 above), wherein the processed data is at least partially stored in memory (…wherein Saa, in [0044], teaches a processor configured to execute one or more instructions stored in memory, wherein the processor is configured to: obtain a plurality of motion vectors regarding a 360-degree image…). Saa does not further teach the method of claim 4: wherein the received data is processed in real-time without storing said data in memory (…however, Wissenbach, in [0047], teaches the capturing of live video by a group cameras in one or more consecutive frames. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a group of cameras can be coordinated to provide a multidirectional view of an environment with the need to be observed in live mode and thus be processed accordingly…). 29. Claims 10-11, 13-14, 20-23, and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Calabretta (Mapping on the HEALPix grid). 30. Regarding claim 10, Saa teaches the method of claim 1(see claim 1 above). Though Saa, in [0052], teaches cube map projection and equi-rectangular projection, Saa does not further teach the method of claim 1 wherein said one or more data structures comprise: a Hierarchical Equal Area isoLatitude Pixelization (HEALPix) projection (…however, Calabretta teaches HEALPix projection as a simple way of storing HEALPix data on a two-dimensional square grid as used in conventional imaging and mapping (pg. 2-paragraph 3). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that HEALPix projection, as taught by Calabretta, could have been implemented in the formatting and storing of a 360-degree image...). 31. Regarding claim 11, Saa in view of Calabretta teaches the method of claim 10 (see claim 10 above),wherein the HEALPix projection applies a HEALPix double pixelisation derivative (…Calabretta teaches an extension to the HEALPix pixelisation (double-pixelization) wherein additional pixels are added to a HEALPix grid, thus increasing the total number of pixels being mapped (pg. 4-paragraph 3). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that HEALPix double-pixelization, as taught by Calabretta, can further enhance the formatting and storing of a 360-degree image...). 32. 
Regarding claim 13, Saa in view of Calabretta teaches the method of claim 10 (see claim 10 above), wherein said motion data is extracted using an algorithm for estimating motion (…wherein Saa, in [0114], teaches the use of an object detection algorithm to detect at least one moving object from the 360 degree image…), wherein the algorithm is configured with respect to properties of equal pixel area and locally Cartesian nature (…wherein Calabretta, on pg. 6-paragraph 7, teaches HEALPix projections denoted in FITS with an algorithm code; it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the grid of a HEALPix pixelisation or double-pixelation (see Fig. 3 and Fig. 4; Calabretta) can further be represented in a coordinate system of cartesian mapping with regards to smaller defined areas…). 33. Regarding claim 14, Saa in view of Calabretta teaches the method of claim 10 (see claim 10 above), wherein said motion data is extracted using optical flow (…Saa, in [0057], teaches that motion vector may represent displacement information of a particular area of an image with regards to a reference frame and a current frame; this may be viewed as optical flow wherein motion may be apparent by movement of pixels between two frames…). 34. Regarding claim 20, Saa teaches the method of claim 1 (see claim 1 above). Saa does not teach the method further comprising: extracting orthogonal bands in relation to said one or more data structure associated with a spherical projection, wherein the orthogonal bands are about the identifiable cartesian axes of the spherical projection (…however, Calabretta teaches members of HEALPix projection wherein, e.g. in Fig 1, extraction of orthogonal bands can correspond to the rescaled projections of the members as depicted in Fig. 1. Therein, it is evident that the members of the projection (of H = 1-4) can be further defined by cartesian axes (x,y). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that HEALPix projection, as taught by Calabretta, could have been implemented in the formatting and storing of a 360-degree image...). 35. Regarding claim 21, Saa in view of Calabretta teaches the method of claim 20 (see claim 20 above), wherein the spherical projection is HEALPix (…Calabretta teaches HEALPix projection as a simple way of storing HEALPix data on a two-dimensional square grid as used in conventional imaging and mapping (pg. 2-paragraph 3). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that HEALPix projection, as taught by Calabretta, could have been implemented in the formatting and storing of a 360-degree image...). 36. Regarding claim 22, Saa in view of Calabretta teaches the method of claim 20 (see claim 20 above), wherein the spherical projection applies a double pixelization (…wherein Calabretta teaches double-pixelisation on the HEALPix projection (pg. 4 (3.1 HEALPix double pixelisation), as depicted in Fig. 4…). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that HEALPix projection, as taught by Calabretta, could have been implemented in the formatting and storing of a 360-degree image...). 37.
Regarding claim 23, Saa in view of Calabretta teaches the method of claim 20 (see claim 20 above), further comprising: applying spatial filtering on the spherical projection by use of the orthogonal bands (…wherein, Saa teaches in [0020] teaches the removing of motion vectors of a preset area through filtering; Calabretta teaches members of HEALPix projection wherein, e.g. in Fig 1, extraction of orthogonal bands can correspond to the rescaled projections of the members as depicted in Fig. 1. Therein, it is evident that the members of the projection (of H = 1-4) can be further defined by a cartesian axes (x,y). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that HEALPix projection, as taught by Calabretta, could have been implemented in the formatting and storing of a 360-degree image and thus be used for filtering purposes of motion vectors...). 38. Regarding claim 25, Saa teaches the method of claim 1 (see claim 1 above), further comprising: extracting orthogonal bands in relation to said one or more data structure associated with a cylindrical projection (…wherein Calabretta, on pg. 1-Fig. 1, denotes that the HEALPix class of projections to reveal the underlying cylindrical equal-area projection in the equatorial region…), wherein the orthogonal bands are about the identifiable Cartesian axes of the cylindrical projection in a manner to capture a direction based on vertical strips of said data (…wherein the depiction of Fig. 1 shows members (orthogonal bands) of the projection which are depicted in a cartesian grid. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a method of mapping as taught by Calabretta can be used to implement to perform filtering of motion vectors as taught by Saa…). 39. Regarding claim 26, Saa teaches the method of claim 1 (see claim 1 above), further comprising: Extracting orthogonal bands in relation to said one or more data structures associated with a spherical projection (…wherein Calabretta teaches members of HEALPix projection wherein, e.g. in Fig 1, extraction of orthogonal bands can correspond to the rescaled projections of the members as depicted in Fig. 1…), Wherein the extracted orthogonal bands are adapted to be applied with an algorithm associated with a projection (…Calabretta, on pg. 6-paragraph 6, teaches HEALPix projections being denoted with an algorithm code…), Wherein the extracted orthogonal bands are used as an encoding on said one or more data structures (…wherein Calabretta, on pg. 6-paragraph 6, teaches HEALPix projections will be denoted1 in FITS (flexible image transport system). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the teachings of Saa could have been implemented in a mapping and data structure method of a sphere as taught by Calabretta…). 40. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Taylor (The basics of FPGA mathematics EE Times (Year: 2012)). 41. 
Regarding claim 17, Saa teaches the method of claim 1 (see claim 1 above), wherein the method is implemented on a field-programmable gate array using a fixed-point implementation (…wherein Saa teaches the use of an FPGA for a software and hardware component to perform functions of its specification; Taylor further teaches the representation of a fixed-point number system within a design (algorithm) used in an FPGA. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a fixed-point number system could have been implemented in an FPGA, wherein fixed-point representation maintains a decimal point within a fixed position, which simplifies arithmetic operations…). 42. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Harris (Mixed-Precision Programming with CUDA 8). 43. Regarding claim 18, Saa teaches the method of claim 1 (see claim 1 above), wherein the method is implemented on a vision accelerator unit using 16-bit floating-point arithmetic (…wherein Saa, in [0044], teaches the use of an FPGA or computer vision accelerator chip implementation. However, Saa does not specify the use of 16-bit floating-point arithmetic on the FPGA or computer vision accelerator chip. However, Harris teaches the NVIDIA Tesla P100 (enabled by Pascal architecture) which can perform FP-16 arithmetic for deep neural network architectures; wherein, as taught on pg. 3-paragraph 3, Harris teaches that 16-bit floating point arithmetic suffices for deep neural networks in GPU related applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a computer vision accelerator chip (as taught by Saa) may be implemented by devices as taught by Harris, thereby providing single-precision high performance computing…). 44. Claims 24 and 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Calabretta (Mapping on the HEALPix grid) and further view of Kim et al. (US 2021/0366081 A1; further referred to as Kim). 45. Regarding claim 24, Saa in view of Calabretta teaches the method of claim 20 (see claim 20 above). The combined reference does not teach the method further comprising: performing a 2D convolution on the spherical projection by generating 1D convolution around each of the orthogonal bands to improve said performance of the 2D convolution (…however, Kim teaches an image processing method using a line unit operation, including first and second convolution operators; wherein [0115] teaches a first 1D convolution operator to perform a 1D convolution operation (wherein, as taught in [0101], a first convolution operator may generate a feature map by performing); [0162] teaches a 2D convolution operator configured to perform a 2D convolution operation (wherein, as taught in [0103], the second convolution operator may perform a convolution operation based on a feature map output in the 2D form). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that different dimensional convolutions can be applied to different parts of an image…). 46.
Regarding claim 27, Saa in view of Calabretta teaches the method of claim 20 (see claim 20 above), wherein the orthogonal bands are applied simultaneously to generate convolutions in a parallel manner based on a spherical projection (…wherein Calabretta teaches spherical projections broken up into members, Kim further teaches convolutional operations that may be performed. Further, Kim, in [0184], teaches components included in the image processing apparatus 10 may operate concurrently or in parallel to process an image; wherein apparatus 10 includes controller 200 (as depicted in Fig. 1), controller 200 includes first and second convolution operators (as depicted in Fig. 2). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that parallel processing methods with regards to convolution, as taught by Kim, could have been implemented with the combined teachings of Saa in view of Calabretta to thus implement a fast processing pipeline structure…). 47. Regarding claim 28, Saa in view of Calabretta and further view of Kim teaches the method of claim 27 (see claim 27 above), wherein each of the orthogonal bands are segmented for parallel processing to generate convolutions associated with a spherical projection (…wherein Calabretta teaches spherical projections broken up into members, Kim further teaches convolutional operations that may be performed. Further, Kim, in [0184], teaches components included in the image processing apparatus 10 may operate concurrently or in parallel to process an image; wherein apparatus 10 includes controller 200 (as depicted in Fig. 1), controller 200 includes first and second convolution operators (as depicted in Fig. 2). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that parallel processing methods with regards to convolution, as taught by Kim, could have been implemented with the combined teachings of Saa in view of Calabretta to thus implement a fast processing pipeline structure…). 48. Claims 29-32 and 42-43 are rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Cote et al. (US 2015/0296193 A1; further referred to as Cote). 49. Regarding claim 29, Saa teaches the method of claim 1 (see claim 1 above). Saa doesn't further teach the method wherein data from one or more sources are demosaiced (…however, Cote, in [0008], teaches converting raw pixel data into RGB data using demosaic. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that raw pixel data can be used so as to interpolate missing color values of each pixel…). 50. Regarding claim 30, Saa in view of Cote teaches the method of claim 29 (see claim 29 above). Saa doesn't further teach the method wherein the data is RGB data corresponding to visual information (…however, Cote, in [0008], teaches converting raw pixel data into RGB data using demosaic. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that raw pixel data can be used so as to interpolate missing color values of each pixel…). 51. Regarding claim 31, Saa teaches the method of claim 1 (see claim 1 above).
Saa doesn't teach the method further comprising: applying pixel binning and down sampling to create a rotationally stabilized omnidirectional field of view (…however, Cote, in [0523], teaches downsampling RGB image data and further in [0728] teaches binning to produce binned raw image pixels. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that downsampling can be applied to improve signal-to-noise ratio and further to produce thumbnails…). 52. Regarding claim 32, Saa in view of Cote teaches the method of claim 31 (see claim 31 above), wherein the pixel binning is configured for debayering said data by separately accumulating three colour channels (…wherein Cote, in [0730], teaches that with regard to binned data, 2x2 pixel data may form a bayer pattern and may be determined by averaging the values of the pixels from full resolution raw image data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that binning, as taught by Cote, is implemented as an effective way to achieve higher sensitivity of light sensing…). 53. Regarding claim 42, Saa teaches the method of claim 1 (see claim 1 above), wherein said data comprises non-RGB data, wherein the non-RGB data are associated with non-colour information (…wherein Saa, in [0151], teaches a data learner which may learn standards for obtaining 3D rotation information from motion vectors regarding a 360 degree image…). Saa, in [0014], generally teaches image data capturing with a device. However, Saa does not specify wherein said data comprises RGB data (…however, Cote teaches RGB formatted image data processing in [0300]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the image capturing device, as taught by Saa, could have been implemented with a device that processes RGB data as taught by Cote, so as to employ color filtering for better color-defined pixels…). 54. Regarding claim 43, Saa in view of Cote teaches the method of claim 42 (see claim 42 above), wherein the non-RGB data comprise: data associated with spectrums of light, light polarization information, outputs from RADAR, LIDAR, depth perception, ultrasonic distance information, temperature, metadata such as semantic labelling, bounding box vertices, terrain type, or zoning such as keepout areas, time-to-collision information, collision risk information, auditory, olfactory, somatic, or any other forms of directional sensor data, intermediate processing data, output data generated by algorithms, or data from other external sources (…wherein Saa, in [0157], teaches a data obtainer which may communicate with an external device to obtain learning data related to a 360 degree image…). 55. Claims 33-35 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Baxter et al. (US 2004/0095492 A1; further referred to as Baxter). 56.
Regarding claim 33, Saa teaches the method of claim 1 (see claim 1 above), further comprising: applying heterogeneous sensing to the stable field of view, wherein the stable field of view is omnidirectionally established based on one or more techniques (…wherein Saa teaches a stable field of view (namely, a distortion corrected 360-degree image, Baxter teaches in an imaging system, as taught in [0009], including a VASI subsystem with the capability of retaining the highest possible spatial resolution on regions of interest that are important to the overall system. Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention that a variable acuity superpixel imager (VASI) could have been implemented as a heterogeneous method of sensing received light by an imager with different resolution, so to provide better definition to particular areas of an image…). 57. Regarding claim 34, Saa in view of Baxter teaches the method of claim 33 (see claim 33 above), wherein heterogeneous sensing comprises: encoding at least part of the stable field of view at a higher spatial resolution (…wherein Saa teaches encoded stable field of view (in claim 1), Baxter, in [0006], further teaches the sensing of high spatial resolution imagery at very high frame rates, covering wide fields of view of low data bandwidth. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that sensed data as taught by Baxter could have been encoded for data processing as taught Saa…). 58. Regarding claim 35, Saa in view of Baxter teaches the method of claim 33 (see claim 33 above), further comprising: sampling over regions of the stable field of view dynamically based on the heterogeneous sensing (…wherein Baxter, in [0010], teaches a dynamically defined super-pixel which selectively generates a control signal for elements of the dynamically defined super-pixel. [0013] further teaches, with regards to fig. 1a, high-resolution pixels in a "foveal" region as well as lower-resolution superpixels that are the result of agglomeration of "standard" pixels. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a processor can selectively determine a pixel with a high spatial resolution on regions of interest in a dynamic manner…). 59. Regarding claim 37, Saa in view of Baxter teaches the method of claim 33 (see claim 33 above), wherein the heterogeneous sensing is configured to sample more frequently from a region of interest from said data to provide a sampling rate associated with said region, wherein said region associated with a higher sampling rate can be dynamically movable and resizable on the stable field of view, wherein different regions comprises different sampling rates (… Baxter, in [0013] with regards to fig. 1a, teaches high-resolution pixels in a "foveal" region as well as lower-resolution superpixels that are the result of agglomeration of "standard" pixels; [0027] in accordance with fig. 1b, teaches two high-spatial resolution regions (14, 15) that may be dynamically changed; wherein, in accordance with fig. 1a, superpixel 10 defines regions of the field of view that are of background peripheral regions which are not as significant as image components which are sampled by high-spatial resolution foveal pixels 11. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a variable acuity superpixel imager (VASI) could have been implemented as a heterogeneous method of sensing received light by an imager with different resolution, so to provide better definition to particular areas of an imager, as such by defining certain regions with a higher-resolution than other regions, as taught by Baxter…). 60. Claims 36 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Baxter et al. (US 2004/0095492 A1; further referred to as Baxter) and further view of Calabretta (Mapping on the HEALPix grid). 61. Regarding claim 36, Saa in view of Baxter teaches the method of claim 33 (see claim 33 above), wherein the heterogeneous sensing is applied in relation to a HEALPix projection or a double pixelisation (…wherein Calabretta teaches HEALPix projection or a double pixelisation, see Fig. 4 of Calabretta, it would be feasible to combine the concept to the teaching of Baxter wherein heterogeneous sensing can be implemented. Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention that a variable acuity superpixel imager (VASI) could have been implemented as a heterogeneous method of sensing received light by an imager with different resolution, so to provide better definition to particular areas of an imager…). 62. Regarding claim 38, Saa in view of Baxter teaches the method of claim 33 (see claim 33 above), further comprising: dividing one or more HEALPix pixels of said data to increase spatial resolution of the stable field of view (…Baxter, in [0013] with regards to fig. 1a, teaches high-resolution pixels in a "foveal" region as well as lower-resolution superpixels that are the result of agglomeration of "standard" pixels; further, Calabretta teaches HEALPix double-pixelisation with increased interpolated pixels. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that increased resolution can be achieved by means of double-pixelisation as taught by Calabretta, thus to provide better focus on certain regions of focus…). 63. Claims 39-41 are rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Shichijo et al. (US 2019/0318151 A1; further referred to as Shichijo). 64. 
Regarding claim 39, Saa teaches the method of claim 1 (see claim 1 above), further comprising: identifying an area of interest on an encoded stable field of view based on an N-1th data frame of said data (…wherein Saa teaches the encoding of a stable field of view in accordance with claim 1; Shichijo teaches an area determination method, as taught in [0041]…), wherein said data comprise at least a plurality of data frames (…wherein Shichijo, in [0040-0041], teaches the storing of image data of image frames thus to be read and to detect an area…); mapping said area of interest to an Nth data frame of said data (…wherein [0041] teaches an area detector which uses a template matching method to search for a particular area, "moving a position of a face reference template stepwise with respect to the image data at a predetermined number of pixel intervals" (thus viewed as mapping)…); and extracting a subset of data from said data based on the mapping (…wherein [0041] further teaches that when an image area is matched to a reference template, the area is extracted (a rectangular frame is used to extract the face image area). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that an area of interest (e.g. an area wherein a particular object may be located) may be determined to be identified, mapped, and extracted as taught by Shichijo so as to effectively focus on an area of interest…). 65. Regarding claim 40, Saa in view of Shichijo teaches the method of claim 39 (see claim 39 above), wherein the extracted subset of data is represented by a 2D image independent of the encoded stable field of view (…wherein Shichijo, in [0097], teaches the positions of feature points as objects to be detected on a two-dimensional plane. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that an area of interest (e.g. an area wherein a particular object may be located) may be determined to be identified, mapped, and extracted as taught by Shichijo so as to effectively focus on an area of interest…). 66. Claim 41 is rejected under 35 U.S.C. 103 as being unpatentable over Saa-Garriga et al. (US 2021/0142452 A1; further referred to as Saa) in view of Baxter et al. (US 2004/0095492 A1; further referred to as Baxter) and further view of Wissenbach et al. (US 2018/0332226 A1; further referred to as Wissenbach). 67. Regarding claim 41, Saa in view of Baxter and further in view of Wissenbach teaches the method of claim 39 (see claim 39 above), wherein the mapping is continuously updated to implement maximal-resolution heterogeneity (…wherein Saa, in [0051], teaches pixels forming frames of the 360-degree image may be indexed to a three-dimensional (3D) coordinate system defining locations of respective pixels on a surface of a virtual sphere; Baxter teaches an imaging system, as taught in [0009], including a VASI subsystem with the capability of retaining the highest possible spatial resolution on regions of interest that are important to the overall system; Wissenbach, additionally in [0047], teaches the capturing of live video by a group of cameras in one or more consecutive frames…), wherein the mapping is at least partially adapted to encode the stable field of view (…Saa, in [0051], teaches pixels forming frames of the 360-degree image may be indexed to a three-dimensional (3D) coordinate system defining locations of respective pixels on a surface of a virtual sphere…). Conclusion 68.
Any inquiry concerning this communication or earlier communications from t
Read full office action
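The rejection of claim 1 leans on three steps the examiner reads into Saa: lift motion vectors from the stored equirectangular frame onto the unit sphere, fit the global (camera) rotation, and treat the residual as translational or object motion. The sketches below are editorial illustrations of a few techniques recited in the office action, not code from the application or any cited reference. This first one assumes a standard equirectangular convention and a least-squares (Kabsch-style) rotation fit, neither of which the record prescribes.

```python
import numpy as np

def equirect_to_sphere(u, v, width, height):
    """Lift equirectangular pixel coordinates to unit vectors on the sphere."""
    lon = (u / width) * 2.0 * np.pi - np.pi          # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi         # latitude in [-pi/2, pi/2]
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def fit_global_rotation(ref_dirs, cur_dirs):
    """Least-squares rotation taking reference directions onto current ones
    (Kabsch / orthogonal Procrustes on unit vectors)."""
    h = ref_dirs.T @ cur_dirs
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T       # R @ ref ≈ cur

def isolate_residual_motion(ref_dirs, cur_dirs):
    """Undo the fitted global rotation; the residual approximates
    translational / independent object motion on the sphere."""
    rot = fit_global_rotation(ref_dirs, cur_dirs)
    stabilised = cur_dirs @ rot                       # rows become R^T applied to each direction
    return rot, stabilised - ref_dirs

# Usage: sample motion-vector endpoints in two frames, lift both sets with
# equirect_to_sphere, then call isolate_residual_motion(ref, cur).
```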
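Claims 10-11 and 20-28 turn on Calabretta's HEALPix grid (equal-area, iso-latitude pixels that flatten onto a 2D layout). As a rough illustration of re-gridding stabilised sphere directions onto HEALPix pixels, the sketch below uses the third-party healpy package; that library choice and the accumulation scheme are assumptions of this example, not anything in the record.

```python
import numpy as np
import healpy as hp   # assumed dependency, not part of the cited references

NSIDE = 16            # HEALPix resolution parameter: 12 * NSIDE**2 equal-area pixels

def accumulate_on_healpix(directions, samples):
    """Average per-direction samples (e.g. intensities or flow magnitudes)
    into the HEALPix pixel each unit direction falls in."""
    pix = hp.vec2pix(NSIDE, directions[:, 0], directions[:, 1], directions[:, 2])
    npix = hp.nside2npix(NSIDE)
    total = np.zeros(npix)
    count = np.zeros(npix)
    np.add.at(total, pix, samples)
    np.add.at(count, pix, 1)
    return np.divide(total, count, out=np.full(npix, np.nan), where=count > 0)
```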
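Claims 24 and 27-28 recite generating 1D convolutions around orthogonal bands to improve the performance of a 2D convolution. One common reading (not necessarily the applicant's construction) is a separable filter: a row pass along one band direction and a column pass along the orthogonal one, each of which can also run in parallel per band. A minimal sketch under that assumption:

```python
import numpy as np
from scipy.ndimage import convolve1d

def separable_conv2d(panel, k_row, k_col):
    """Apply a separable 2D filter as two orthogonal 1D passes.
    Per-pixel cost drops from len(k_row) * len(k_col) to len(k_row) + len(k_col)."""
    tmp = convolve1d(panel, k_row, axis=1, mode="wrap")      # along one band direction
    return convolve1d(tmp, k_col, axis=0, mode="reflect")    # along the orthogonal bands

# Example: a binomial (Gaussian-like) blur built from its 1D factor.
k1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
k1d /= k1d.sum()
# blurred_panel = separable_conv2d(panel, k1d, k1d)
```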
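For claim 17, the rejection leans on Taylor's point that fixed-point arithmetic keeps the radix point at a fixed position, so an FPGA can work with plain integers. A tiny Q16.16 illustration; the format and function names are illustrative only.

```python
FRAC_BITS = 16              # Q16.16: 16 integer bits, 16 fractional bits
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    # Full-width integer product, then shift to put the radix point back.
    return (a * b) >> FRAC_BITS

def to_float(a: int) -> float:
    return a / ONE

# 1.5 * 2.25 == 3.375 using only integer operations.
assert to_float(fixed_mul(to_fixed(1.5), to_fixed(2.25))) == 3.375
```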
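Claim 18's rejection cites Harris on 16-bit floating point. The trade-off the examiner gestures at is easy to see in numpy: FP16 halves storage and bandwidth but carries only about three decimal digits, so naive accumulation drifts. Purely an editorial demo, unrelated to any cited code.

```python
import numpy as np

acc16 = np.float16(0.0)
for _ in range(10_000):
    acc16 += np.float16(1e-4)     # true sum would be 1.0

acc32 = np.full(10_000, 1e-4, dtype=np.float32).sum()

print(acc16)   # stalls around 0.25: increments fall below half an FP16 ulp
print(acc32)   # ~1.0
```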
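Claims 31-32 add Cote's binning and debayer teachings: collapse each 2x2 Bayer quad while accumulating the colour channels separately. The sketch assumes an RGGB layout, which is a choice made for the example rather than anything recited.

```python
import numpy as np

def bin_rggb(raw):
    """Collapse each 2x2 RGGB quad into one half-resolution RGB sample,
    accumulating same-colour sites separately (the two greens are averaged)."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    return np.dstack([r, (g1 + g2) * 0.5, b])   # H/2 x W/2 x 3

# raw = np.random.randint(0, 1024, (480, 640)).astype(np.uint16)
# rgb_small = bin_rggb(raw)     # 240 x 320 x 3
```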

Prosecution Timeline

Nov 21, 2023
Application Filed
Sep 12, 2025
Non-Final Rejection — §102, §103
Dec 17, 2025
Response Filed
Apr 11, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12538047
Ambient Light Sensing with Image Sensor
2y 5m to grant · Granted Jan 27, 2026
Patent 12506981
PHOTOELECTRIC CONVERSION APPARATUS, METHOD FOR CONTROLLING PHOTOELECTRIC CONVERSION APPARATUS, AND STORAGE MEDIUM
2y 5m to grant · Granted Dec 23, 2025
Patent 12495224
IMAGE SENSING DEVICE AND IMAGE PROCESSING METHOD OF THE SAME
2y 5m to grant · Granted Dec 09, 2025
Patent 12470797
OPTICAL ELEMENT DRIVING MECHANISM
2y 5m to grant · Granted Nov 11, 2025
Patent 12452534
CONTROL APPARATUS, LENS APPARATUS, IMAGE PICKUP APPARATUS, IMAGE PICKUP SYSTEM, CONTROL METHOD, AND A NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant · Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 84% (+33.6%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
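The 84% "With Interview" figure is consistent with simply adding the +33.6-point interview lift to the 50% career allow rate; the sketch below makes that assumption explicit (the tool's actual model is not disclosed).

```python
base_allow_rate = 0.50     # examiner's career allow rate
interview_lift  = 0.336    # lift observed among resolved cases with an interview

with_interview = min(base_allow_rate + interview_lift, 1.0)
print(f"{with_interview:.0%}")   # 84%
```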
