DETAILED ACTION
This action is in response to the application filed on September 29, 2023. Claims 1-20 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on September 29, 2023 is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claims 1, 10, and 19, these claims recite the following limitations, which are found to be abstract ideas that do not recite a practical application or significantly more, with claim 1 being exemplary:
detecting motion in the NR image (abstract idea as a mental process, as the human mind is capable of detecting motion in an image);
generating an updated activity score map by incrementing the activity score map based on the detected motion in the NR image (abstract idea as mathematical concepts, mathematical relationships, mathematical formulas or equations, or mathematical calculations);
performing clustering on the updated activity score map to identify a region of interest (ROI) in the NR image (abstract idea as mathematical concepts, mathematical relationships, mathematical formulas or equations, or mathematical calculations);
generating dewarping information of the ROI based on a constraint of the NR camera (abstract idea as mathematical concepts, mathematical relationships, mathematical formulas or equations, or mathematical calculations).
This judicial exception is not integrated into a practical application for the following reasons. Claims 1, 10, and 19 recite the additional element of “outputting the dewarping information of the ROI,” which, while not necessarily an abstract idea, is insignificant pre/post-solution extra activity since it is merely data output (see MPEP 2106.05(g)). Moreover, this element amounts to outputting data in a computer-based system and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II.
Claims 10 and 19 further recite the additional elements of “a non-transitory computer-readable storage medium” and “processor,” respectively. While these limitations include additional elements, they are not sufficient to integrate the abstract ideas recited in claims 10 and 19 into a practical application, as they amount to mere generic computer elements and thus amount to no more than a recitation of the words “apply it” (or an equivalent), or are no more than mere instructions to implement an abstract idea or other exception on a computer. See MPEP 2106.05(f).
Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and in combination, the above-recited additional elements from claims 10 and 19 do not add significantly more (also known as an “inventive concept”) to the exception. Rather, the additional elements identified above perform well-understood, routine, conventional computer functions.
Therefore, independent claims 1, 10, and 19 are directed towards an abstract idea without a practical application or significantly more.
Regarding claims 2-8 and 11-18, the limitations are merely directed towards insignificant pre/post-solution extra activity and do not integrate the abstract idea recited in claim 1 into a practical application.
Regarding claims 9 and 20, the limitations are merely directed towards abstract ideas as mathematical concepts, mathematical relationships, mathematical formulas or equations, or mathematical calculations, without reciting a practical application or significantly more.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-8 and 10-19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sririam et al., US 20230016568.
Regarding claim 1, Sririam teaches
A method of processing a video stream (see Sririam, Fig. 2, and Paragraph [0060], “In the example of FIG. 2, the area 200 includes cameras 228, 230, 232, 234, 236, 238, 240, 242, 244, and 246,” and “one or more of the cameras may be general surveillance cameras or other types of cameras used to capture image data (e.g., video data)”) from a non-rectilinear (NR) camera (see Sririam, Fig. 2, and Paragraphs [0062]-[0063], “one or more of these cameras may be fisheye lens cameras, and may be installed on a ceiling or a wall of a structure (or a pole for outdoor areas) such that the field of view of each camera may include at least portions of one or more rows of spots as well as the aisles leading to the row(s). In FIG. 2, the cameras 228, 230, 232, 234, 236, 238 may be examples of such cameras”),
the method comprising: obtaining an activity score map that corresponds to a view of the NR camera (see Sririam, Paragraph [0048], “The visualization system 106 may receive data from the semantic analysis system 104 and/or the perception system 102 to generate and present one or more visualizations related to the area 200, which may be represented by visualization data 166 (e.g., occupancy heat maps, dashboards, 3D recreations of the states of the area 200, etc),” visualization data 166 (e.g., occupancy heat maps, dashboards, 3D recreations of the states of the area 200, etc) is considered to be an activity score map);
obtaining, from the NR camera, an NR image (see Sririam, Paragraph [0049], “one or more fisheye images generated using the sensor(s)—such as the image 164A”) that includes the view of the NR camera (see Sririam, Paragraph [0042], “These approaches may receive first image data representative of a first field of view of a first image sensor and second image data representative of a second field of view of a second image sensor.”);
detecting motion in the NR image (see Sririam, Paragraph [0049], “The intra-feed object tracker 120 may be configured to track motion of objects within a feed of the sensor data 162—such as a single-camera feed”);
generating an updated activity score map by incrementing the activity score map based on the detected motion in the NR image (see Sririam, Paragraph [0106], “The visualization system 106 may use the updates to update a 3D rendering of the area with one or more objects. While the updates may be periodic, the 3D rendering may interpolate one or more updated values (e.g., location and orientation data of an object) in order to display a smoother transition”);
performing clustering on the updated activity score map to identify a region of interest (ROI) in the NR image (see Sririam, Paragraph [0140], “a clustering algorithm may be configured to cluster the detected object locations into six clusters (e.g., using k-means), or at most six clusters. The object locations within a cluster may then be combined to form a representative ROI for a designated spot. For example, where a particular object or similar object (e.g., same make and model of vehicle) occupies the field of view at a higher frequency than a sufficiently different object, the ROI indicator line 404A may resemble that particular object or similar object more than the different object,” an object that occupies the field of view at a higher frequency would be shown on updated visualization data 166 in which a clustering algorithm may be configured);
generating dewarping information of the ROI based on a constraint of the NR camera (see Sririam, Paragraph [0050], “The camera calibrator 130 may be used to determine and/or define the one or more surfaces and/or dewarping parameters for dewarping the fisheye image(s),” dewarping parameters are considered to be dewarping information; fisheye lens are considered to be based on a constraint of the NR camera),
wherein the dewarping information includes parameters to convert the ROI into a rectilinear output (Sririam, Paragraph [0076], “the sensor data processor 112 may dewarp the image 164A to generate the surfaces 302, 304, and 406, shown in FIGS. 3A, 4A, and 4B,” surfaces are considered to be a rectilinear output);
and outputting the dewarping information of the ROI (see Sririam, Paragraph [0078], “the object detector 114 may output coordinates that define bounding boxes 402A, 402B, 402C, 402D, and 402F of FIG. 4A, each corresponding to a detected object,” output coordinates is considered outputting the dewarping information of the ROI).
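For context only, the sequence of limitations mapped above for claim 1 (detecting motion, incrementing an activity score map, and clustering the map to identify an ROI) could be sketched as below. This is a minimal illustration of the claimed steps as characterized in this rejection; the grid representation, thresholds, single-bounding-box "clustering," and all names are assumptions of this sketch, not disclosure from the application or from Sririam.

```python
import numpy as np

MOTION_THRESH = 25      # per-pixel intensity-change threshold (illustrative)
ACTIVITY_THRESH = 5.0   # activity level that marks a cell "active" (illustrative)

def detect_motion(prev_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels whose intensity changed noticeably between frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > MOTION_THRESH

def update_activity_map(activity: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Increment the activity score wherever motion was detected."""
    return activity + motion.astype(activity.dtype)

def find_roi(activity: np.ndarray):
    """Bounding box (top, left, bottom, right) around sufficiently active cells.
    A single bounding box stands in here for a full clustering step."""
    ys, xs = np.nonzero(activity >= ACTIVITY_THRESH)
    if ys.size == 0:
        return None
    return (ys.min(), xs.min(), ys.max(), xs.max())
```

A real system would run these steps per frame of the video stream and feed the resulting ROI to the dewarping stage.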
Regarding claim 2, Sririam further teaches the method of claim 1,
further comprising: using a projection model to convert the ROI into the rectilinear output based on the dewarping information (see Sririam, Paragraph [0155], “Where the sensor data processor 112 performed dewarping, this may be performed using cylindrical or other geometrical projections”);
and inputting the rectilinear output into an image recognition algorithm (see Sririam, Paragraph [0078], “for the surfaces 302 and 304 that correspond to a spot and/or row of spots the object detector 114 may use one or more machine learning models (“MLMs”)—such as, but without limitation a deep neural network architecture—trained to detect the front or back of a parked vehicle(s) and/or other type of object,” object detector 114 is considered the image recognition algorithm),
wherein generating the dewarping information of the ROI is further based on a constraint of the image recognition algorithm (see Sririam, Paragraph [0071], “Some or all of the image recognition techniques may be executed as processing tasks by one or more graphical processing units (GPUs) in the local one or more computing devices”),
and the dewarping information of the ROI includes boundary limits within the NR image (see Sririam, Fig. 4A, and Paragraph [0080], “For the surface 302, the ROI calibration data may be representative of ROI lines 408A, 408B, 404C, 404D, 404E, and 404F corresponding to parking spots 202A, 202B, 202C, 202D, 202E, and 202F, respectively,” ROI lines are considered to be boundary limits).
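For reference, the kind of projection-model conversion recited in claim 2 (converting an ROI of a fisheye image into a rectilinear output) can be sketched with a generic equidistant fisheye model, where the image radius satisfies r = f·θ. This is a textbook model chosen for illustration only; it is not asserted to be the applicant's projection model or Sririam's camera calibrator, and all parameters are assumptions.

```python
import math

def rectilinear_to_fisheye(u, v, f_rect, f_fish, cx, cy):
    """Map an output (rectilinear) pixel back to its source pixel in an
    equidistant-projection fisheye image centred at (cx, cy).

    f_rect: focal length of the virtual rectilinear (pinhole) view
    f_fish: focal length of the fisheye model (r = f_fish * theta)
    All parameters are illustrative; real dewarping uses calibrated intrinsics.
    """
    x, y = u - cx, v - cy                # offset from the principal point
    r_rect = math.hypot(x, y)            # radius in the rectilinear image plane
    theta = math.atan2(r_rect, f_rect)   # incidence angle of the viewing ray
    r_fish = f_fish * theta              # equidistant model: r = f * theta
    scale = r_fish / r_rect if r_rect else 0.0
    return cx + x * scale, cy + y * scale
```

Dewarping then amounts to sampling the fisheye image at the returned coordinates for every pixel of the rectilinear output.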
Regarding claim 3, Sririam further teaches the method of claim 2,
wherein the boundary limits are based on an orientation constraint of the image recognition algorithm (see Paragraph [0132], “By allowing the shapes to rotate, they may more realistically fit to an object (e.g., the back or front of a car) in a warped image. This may be achieved by training the object detector 114 using rotated bounding boxes”).
Regarding claim 4, Sririam further teaches the method of claim 1,
wherein the constraint of the NR camera is based on a video stream capacity of a surveillance system that includes the NR camera (see Sririam, Paragraph [0073], “the perception system 102 may be implemented using one or more instances of a high performance platform for deep learning inference and video analytics, such as the DeepStream SDK by NVIDIA Corporation. For example, the perception system 102 may support multi-stream video parsing and decoding where each stream may be processed at least partially using a respective data processing pipeline”).
Regarding claim 5, Sririam further teaches the method of claim 4,
further comprising: identifying multiple ROIs in the NR image (see Sririam, Paragraph [0077], “The camera calibration settings may be configured such that the surfaces generated and/or determined by the sensor data processor 112 each include one or more ROIs. For example, the surface 302 may be configured to include at least the parking spots 202A, 202B, 202C, 202D, 202E, and 202F,” 202A-202F are considered to be multiple ROIs);
selecting a predetermined number of ROIs based on the video stream capacity of the surveillance system (see Sririam, Paragraph [0154], “Once camera calibration is performed, ROIs of images generated by a particular camera may also be defined to correspond to specific regions (such as, without limitation, parking spaces, parking spaces designated for particular vehicles, portions of an aisle, etc.) in the real world,” ROIs of images generated are defined to correspond to specific regions is considered to be selecting a predetermined number of ROIs based on the video stream capacity);
and generating dewarping information for each of the predetermined number of ROIs (see Sririam, Paragraph [0149], “the separation calibrator 126 may, for example, average, cluster, or otherwise statistically combine the bounding boxes into one or more groups (e.g., a group for each ROI),” bounding boxes coordinates are considered to be dewarping information).
Regarding claim 6, Sririam further teaches the method of claim 5,
wherein the dewarping information for each of the predetermined number of ROIs includes instructions for combining rectilinear outputs of the respective ROIs into a single output image (see Fig. 10A, and Sririam, Paragraph [0217], “Each of the views may be presented as 3D renderings, rather than real images or video. By consolidating multiple sensors and cameras into one view, the presented visualization allows an observer to quickly make sense of what is happening in the scene instead of watching multiple videos from different cameras (some of which may be in a difficult-to-interpret fisheye format),” consolidating multiple sensors and cameras into one view is considered to be combining rectilinear outputs into a single output image).
Regarding claim 7, Sririam further teaches the method of claim 1,
wherein the updated activity score map is generated based on detected motion in a plurality of NR images acquired over a predetermined duration (see Sririam, Paragraph [0104], “The visualization system 106 may be implemented using an asynchronous protocol for communicating data from a server to a client application (e.g., a web browser), such as web-sockets. The client may initially send a time(s) at which a state(s) of an area (e.g., the area 200) is to be displayed (e.g., a startTimestamp) and a location(s) (e.g., a level of a parking garage, a sub-area, etc.). The asynchronous protocol endpoint may send a query to the query engine 144 (e.g., at a data store) and based on the query, the communications manager 150 of the semantic analysis system 104 may continuously send updates (e.g., from the data store) to the visualization system 106 at periodic intervals (e.g., less than 10 updates per second, such as 2 updates per second) to the visualization system 106”).
Regarding claim 8, Sririam further teaches the method of claim 1,
further comprising: identifying multiple ROIs in the NR image (see Sririam, Fig. 4D, “402A, 402B, 402C, 402D, 402D”, 402A-402D are considered to be multiple ROIs);
wherein the dewarping information generated for each of the multiple ROIs includes instructions for combining rectilinear outputs of the multiple ROIs into a single output image (see Sririam, Fig. 4A, “402A, 402B, 402C, 402D, 402D”).
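The "combining rectilinear outputs of the multiple ROIs into a single output image" of claims 6 and 8 could be as simple as tiling the dewarped crops into one frame. The horizontal-strip layout below is an illustrative choice only; the claims require a single combined image but do not fix a layout.

```python
import numpy as np

def combine_roi_outputs(roi_images):
    """Tile dewarped (rectilinear) ROI crops side by side into one image,
    padding shorter crops with black rows so heights match."""
    height = max(img.shape[0] for img in roi_images)
    padded = [np.pad(img, ((0, height - img.shape[0]), (0, 0)))
              for img in roi_images]
    return np.hstack(padded)
```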
As per claim 10, Claim 10 claims a non-transitory computer-readable medium (CRM) comprising the same limitations as Claim 1; therefore, the rejection and rationale are analogous to that made in Claim 1.
Sririam further teaches, a non-transitory computer readable medium (CRM) storing computer readable program code for processing a video stream from a non-rectilinear (NR) camera, the computer readable program code causes a computer to (see Sririam, Paragraph [0237], “For example, the memory 1104 may store computer-readable instructions (e.g., that represent a program(s) and/or a program element(s), such as an operating system. Computer-storage media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device 1100”).
As per claim 11, Claim 11 claims the same limitation as Claim 2 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 2.
As per claim 12, Claim 12 claims the same limitation as Claim 3 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to that made in Claim 3.
As per claim 13, Claim 13 claims the same limitation as Claim 4 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 4.
As per claim 14, Claim 14 claims the same limitation as Claim 5 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 5.
As per claim 15, Claim 15 claims the same limitation as Claim 6 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 6.
As per claim 16, Claim 16 claims the same limitation as Claim 7 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 7.
As per claim 17, Claim 17 claims the same limitation as Claim 8 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 8.
As per claim 18, Claim 18 claims the same limitation as Claim 9 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to that made in Claim 9.
As per claim 19, Claim 19 claims a system for processing a video stream of a non-rectilinear (NR) camera, the system comprising: a memory; and a processor coupled to the memory, wherein the processor is configured to perform the same limitations as Claim 1; therefore, the rejection and rationale are analogous to that made in Claim 1.
Sririam further teaches, a memory; and a processor coupled to the memory, wherein the processor is configured to (see Sririam, Paragraph [0239], “The CPU(s) 1106 may be configured to execute the computer-readable instructions to control one or more components of the computing device 1100 to perform one or more of the methods and/or processes described herein. The CPU(s) 1106 may each include one or more cores (e.g., one, two, four, eight, twenty-eight, seventy-two, etc.) that are capable of handling a multitude of software threads simultaneously. The CPU(s) 1106 may include any type of processor, and may include different types of processors depending on the type of computing device 1100 implemented (e.g., processors with fewer cores for mobile devices and processors with more cores for servers)”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 9 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sririam et al., US 20230016568, in view of Kojo et al., US 20170030722.
Regarding claim 9, Sririam teaches the method according to claim 1,
further comprising: obtaining a stored activity score map that is different from the updated activity score map, wherein the stored activity score map corresponds to stored dewarping information (see Sririam, Paragraph [0214], “Various approaches described herein allow for information of an area to be collected and processed in real time, with as much granularity as desired by users both in time (last few seconds, last few minutes, last few 10-minute blocks) and in space (individual parking spots, sections, floors, or entire garage, or indeed multiple garages in the same shopping center, or in the city) to provide real time visualizations, or forensic, past time visualizations,” the real time and past time visualizations are considered to be the updated activity score map and the stored activity score map that is different, respectively);
computing a difference score between the stored activity score map and the updated activity score map (see Sririam, Paragraph [0230], “an image processing or computer vision method may be used to calculate the average difference between one or more regions of the image,”);
and determining whether the difference score is less than a predefined threshold (see Sririam, Paragraph [0230], “and if the average difference is below a threshold “theta” then an RTSP error may be detected.”);
Sririam does not expressly teach
when the difference score is greater than or equal to the predefined threshold, replace the stored activity score map with the updated activity score map and identify the ROI in the updated activity score map;
and when the difference score is less than the predefined threshold, use the stored dewarping information corresponding to the stored activity score map.
However, Kojo in a similar invention in the same field of endeavor teaches the method according to claim 1,
when the difference score is greater than or equal to the predefined threshold, replace the stored activity score map with the updated activity score map and identify the ROI in the updated activity score map (see Kojo, Paragraph [0037], “the controller 14 is … configured to switch from the second localization system 26 to the first localization system 24 when the first set of data is greater than (or equal to) a predetermined amount,” second localization system is considered to be the stored activity score map);
and when the difference score is less than the predefined threshold, use the stored dewarping information corresponding to the stored activity score map (see Kojo, Paragraph [0037], “the controller 14 is configured to switch from the first localization system 24 to the second localization system 26 when the first set of data is less than a predetermined amount,” first localization system is considered to be updated activity score map).
Sririam and Kojo are analogous art because they are both in the field of endeavor of smart monitoring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to switch between two localization systems based on whether a set of data is greater than or less than a threshold, as taught in the method of Kojo, in the method of Sririam, in order to accurately determine the vehicle position (see Kojo, Paragraph [0007]).
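For illustration, the conditional logic of claim 9 as construed in this rejection (replace the stored map and recompute when the difference score meets the threshold, otherwise reuse the stored dewarping information) can be sketched as below. The difference score as a mean absolute difference, and the `recompute` callable standing in for the clustering/dewarping steps of claim 1, are assumptions of this sketch, not disclosure from the application, Sririam, or Kojo.

```python
import numpy as np

def select_dewarp_info(stored_map, updated_map, stored_info, threshold, recompute):
    """Claim-9-style switch: reuse the stored dewarping information when the
    activity score map has changed little; otherwise replace the stored map
    and recompute. `recompute` is a hypothetical placeholder for the
    clustering-and-dewarping steps of claim 1."""
    diff = float(np.mean(np.abs(updated_map - stored_map)))
    if diff < threshold:
        return stored_map, stored_info          # below threshold: keep cache
    return updated_map, recompute(updated_map)  # at/above threshold: refresh
```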
As per claim 20, Claim 20 claims the same limitation as Claim 9 and is dependent on a similarly rejected dependent claim. Therefore the rejection and rationale is analogous to that made in Claim 9.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOMINIQUE JAMES whose telephone number is (703)756-1655. The examiner can normally be reached 9:00 am - 6:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.J./Examiner, Art Unit 2666
/MING Y HON/Primary Examiner, Art Unit 2666