DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are currently pending in U.S. Patent Application No. 18/599,466 and an Office action on the merits follows.
Specification
In paragraph 18, “in the video streams ,processing” should read “in the video streams, processing”.
In paragraph 52, “in association with video source.” should read “in association with the video source.”
Claim Interpretation
The claims will be read under the broadest reasonable interpretation standard outlined in MPEP § 2111.
The examiner interprets “each of object of the objects” as recited by claims 1, 10, and 19 to be a grammatical typo. The claims will be assumed to read “each object of the objects” instead.
The examiner interprets “wherein the distortion updates at least traffic paths in the physical area for movement” as recited by claims 3 and 12 to include the consequential and inherent effects the presence of distortion would have on predicted traffic paths.
The examiner interprets the term distortion to generally refer to any differences between a video’s display and the ground truth.
The examiner interprets a “tag” to include a mark of an object in a computer system. See Cambridge Dictionary, Cambridge University Press, https://dictionary.cambridge.org/dictionary/english/tag: “tag noun (COMPUTING): a way of marking part of a computer document in order to identify it or to show how it should be treated when the document is printed or shown on a screen”.
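For illustration of this interpretation only, such a tag may be modeled as a simple record correlating an object's mark in the video stream with the mark of its representation in the map. The following sketch is the examiner's hypothetical, not the applicant's disclosed implementation:

```python
from dataclasses import dataclass

@dataclass
class Tag:
    """Hypothetical record: a mark of an object in a computer system.

    Pairs the mark of an object in the video stream (pixel coordinates)
    with the mark of its representation in the map (map coordinates),
    reflecting the claim-1-style correlation discussed above.
    """
    object_id: str
    stream_xy: tuple[float, float]  # where the object is marked in the frame
    map_xy: tuple[float, float]     # where its representation sits in the map
```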
Claim Objections
Claims 1, 10, and 19 are objected to because of the following informalities: the phrase “each of object of the objects” appears to contain a typographical error and should read “each object of the objects”.
Claims 5 and 14 are objected to because of the following informalities: the phrase “the second user comprises a tag” should read “the second user input comprises a tag”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 3 and 12 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claims 3 and 12 recite the limitation “wherein the distortion updates at least traffic paths in the physical area for movement.” It is unclear whether the mere presence of distortion consequentially and necessarily updates the traffic paths, or whether an output of determined distortion by the device/method results in a compensating adjustment to the traffic paths.
Additionally, the claims state on their face that updates to the traffic paths take place in physical space, which contradicts the nature of the disclosed invention as set forth in the specification and claims. For the purpose of compact prosecution, the examiner interprets the limitation as referring to a digital model of traffic paths.
Furthermore, the claim recites “in the physical area for movement”. It is unclear if this “movement” is referring back to “the movement” recited in claim 1. Clarification/explanation is respectfully requested.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because they are directed to patent-ineligible subject matter. The claims are directed to the Abstract Idea groupings of mathematical
calculations under MPEP § 2106.04(a)(2)(I) and mental processes under MPEP §
2106.04(a)(2)(III). These are judicial exceptions under Step 2A, Prong One of the framework
outlined in the cases of Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 216, 110 USPQ2d 1976, 1980 (2014) and Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012). See MPEP § 2106(III).
Step 1: The claims in question are directed to a method and device for “video surveillance based on distortion identified in video streams.” Machines and processes are both statutory categories. See MPEP 2106.03(I), “A machine is a "concrete thing, consisting of parts, or of certain devices and combination of devices." Digitech, 758 F.3d at 1348-49, 111 USPQ2d at 1719 (quoting Burr v. Duryee, 68 U.S. 531, 570, 17 L. Ed. 650, 657 (1863)). This category "includes every mechanical device or combination of mechanical powers and devices to perform some function and produce a certain effect or result." Nuijten, 500 F.3d at 1355, 84 USPQ2d at 1501 (quoting Corning v. Burden, 56 U.S. 252, 267, 14 L. Ed. 683, 690 (1854))”; See MPEP 2106.03(I), “NTP, Inc. v. Research in Motion, Ltd., 418 F.3d 1282, 1316, 75 USPQ2d 1763, 1791 (Fed. Cir. 2005) ("[A] process is a series of acts.") (quoting Minton v. Natl. Ass’n. of Securities Dealers, 336 F.3d 1373, 1378, 67 USPQ2d 1614, 1681 (Fed. Cir. 2003)). As defined in 35 U.S.C. 100(b), the term "process" is synonymous with "method."” (Step 1: Yes).
Step 2A, Prong One: As explained in MPEP 2106.04(II), a claim “recites” a judicial
exception when the judicial exception is “set forth” or “described” in the claim. Here, each
claim recites or depends upon the mathematical functions of distortion processing, frequency calculation, and trend prediction (Claim 1, “determining distortion in the video stream”, “identifying one or more movement trends for the one or more additional objects in the physical area based on the movement, the user input, and the distortion”; Claim 2, “indicate a frequency”, Claim 3, “wherein the distortion updates at least traffic paths in the physical area for movement”).
The claims also recite varying mental processes, including identification, monitoring, determination, user input, location suggestion, and cartography (Claim 1, “identifying a video stream from a video source”, “obtaining user input identifying objects in the video stream and the map, wherein the user input for each of object of the objects correlates a tag of said object in the video stream to a tag of a representation of said object in the map”, “determining distortion in the video stream based on the user input”, “monitoring movement of one or more additional objects in the video stream”, “identifying one or more movement trends”; Claim 2, “wherein the one or more movement trends comprise a heatmap indicating a frequency”; Claim 4, “The method of claim 1 further comprising: obtaining second user input for a new object in the map, wherein the second user input identifies a representation of the new object in the map; and generating a location suggestion for a tag of the new object in the video stream based on the distortion, the user input, and the second user input”).
The claims are recited at a high level of generality and lack any specifics that would preclude such analysis from falling under the mental processes grouping of processes “practically performed in the mind” (see also MPEP § 2106.04(a)(2), which explains that the use of pen and paper, a ruler, or a computer as a tool to assist in visually or mentally analyzing or observing acquired images/video does not preclude interpretation under the mental processes judicial exception). Activities such as “comprise a heatmap”, “comprises an overhead map”, or “[generate] a location suggestion” may therefore be performed mentally, even if they require an additional tool for data collection and display. Similarly, basic user interface and data processing features do not elevate these claims past a mental process.
Regarding artificial intelligence, to the extent it is implicated, the recited limitation of “identify one or more movement trends for the one or more additional objects in the physical area based on the movement, the user input, and the distortion” is comparable to Claim 2 of Example 47 of the July 2024 PEG regarding subject matter eligibility (https://www.uspto.gov/sites/default/files/documents/2024-AISMEUpdateExamples47-49.pdf). As stated therein, an artificial intelligence’s analyses, detections, and reinforcement learning may be practically performed in the human mind. To the extent mathematical calculations are required to operate and train the artificial intelligence for image analysis, the separate judicial exception of mathematical concepts is also implicated.
As such, the usage of a computer to identify, determine and monitor movement trends from a video stream does not elevate these claims beyond a mental process. (Step 2A, Prong One: Yes).
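As a purely illustrative aside (hypothetical values; not the applicant's disclosed method), the recited trend identification reduces to aggregate arithmetic over observed positions of the kind that could be carried out with pen and paper:

```python
import numpy as np

def movement_trend(track_positions):
    """Illustrative only: a movement 'trend' as simple arithmetic.

    Given an object's successive (x, y) positions, the net displacement
    direction is the mean of the step vectors -- a calculation that could
    be performed by hand from the same observations.
    """
    pts = np.asarray(track_positions, dtype=float)  # shape (N, 2)
    steps = np.diff(pts, axis=0)                    # per-frame displacement
    return steps.mean(axis=0)                       # average direction/speed

# e.g., an object drifting right and slightly down across frames
print(movement_trend([(0, 0), (1.0, 0.1), (2.1, 0.2), (3.0, 0.4)]))
```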
Step 2A, Prong Two: If Prong One of Step 2A is met, the examiner must consider (1)
whether there are any ‘additional elements’ recited in the claim beyond the judicial exception,
and (2) evaluate those additional elements individually and in combination to determine whether
the claim as a whole integrates the exception into a practical application. See MPEP §
2106.04(d).
Limitations the courts have found indicative of integration include: an improvement in
the functioning of a computer, or an improvement to other technology or technical field, as
discussed in MPEP §§ 2106.04(d)(1) and 2106.05(a); applying or using a judicial exception to
effect a particular treatment or prophylaxis for a disease or medical condition, as discussed in
MPEP § 2106.04(d)(2); implementing a judicial exception with, or using a judicial exception in
conjunction with, a particular machine or manufacture that is integral to the claim, as discussed
in MPEP § 2106.05(b); effecting a transformation or reduction of a particular article to a
different state or thing, as discussed in MPEP § 2106.05(c); and applying or using the judicial
exception in some other meaningful way beyond generally linking the use of the judicial
exception to a particular technological environment, such that the claim as a whole is more than
a drafting effort designed to monopolize the exception, as discussed in MPEP § 2106.05(e).
Limitations that the courts have found non-indicative of integration include: merely
reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including
instructions to implement an abstract idea on a computer, or merely using a computer as a tool to
perform an abstract idea, as discussed in MPEP § 2106.05(f); adding insignificant extra-solution
activity to the judicial exception, as discussed in MPEP § 2106.05(g); and generally linking the
use of a judicial exception to a particular technological environment or field of use, as discussed
in MPEP § 2106.05(h).
As an additional note, ‘additional elements’ are generally limitations excluded from interpretation under the Abstract Idea groupings, and may comprise portions of limitations otherwise identified as falling under the Abstract Idea groupings of the 2019 PEG (e.g., any ‘determination’ that may be made mentally by a user, a neural network, and/or generic computer hardware is considered under the ‘apply it’ considerations of MPEP § 2106.05(f)). Broadly, any ‘providing’/outputting and any ‘collection’/input of data (i.e., output display of processed data via a user interface, and basic gathering of video input, whether videos for training a learning model or data/images visually observed and evaluated by a user/operator) also fail to integrate, at least in view of MPEP § 2106.05(g) (extra-solution data gathering/output) and/or MPEP § 2106.05(h), as ‘generally linking’ the exception to a field of use involving machine learning and/or imagery so acquired (e.g., the use of sensors or cameras for acquiring said video broadly). The same determination holds for dependent claims that serve to limit the collection/output of data/videos (by means of what is collected based on recited conditions) and/or introduce limitations generally linking to a field of use.
None of the instant claims appear to explicitly/clearly capture/recite any disclosed
improvement in technology (see MPEP 2106.05(a), with note that ‘functioning of a computer’
concerns functions integral to the way a computer operates and not ‘functions’ that a generic
computer can be programmed/adapted to perform (see also 2106.05(f))) and any ‘additional
elements’, even when considered in combination, fail to integrate at Prong Two of Step 2A
accordingly. Integration in view of subsection (a) requires an identification of the manner in
which the improvement is achieved, to be explicitly and specifically recited in the claims, as
‘additional elements’ precluded from interpretation under any of the Abstract Idea groupings
(since the improvement cannot be to the exception itself). With reference to MPEP 2106.05(a):
It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981).
As applicable here, additional limitations not directed to a judicial exception fail to
integrate at Prong Two of Step 2A. Claim 1 recites a “video stream from a video source”, and a “tag of said object in the video stream to a tag of a representation of said object in the map”; Claims 10 and 19 recite “a storage system; a processing system operatively coupled to the storage system; and program instructions stored on the storage system”. The incorporation of basic video surveillance technology (video capture, video labeling, computers) does little more than generally link the judicial exceptions of mental processes and mathematical calculations to a field-of-use and technological environment. See MPEP § 2106.05(h).
Claim 2 recites “a heatmap indicating a frequency”; Claim 5 recites “generating a location suggestion for a tag”; Claim 6 recites “generating a display of at least one trend of the one or more trends on the map”. These limitations constitute insignificant extra-solution activity under MPEP § 2106.05(g). Specifically, they amount to no more than necessary data outputting under rationale 3 of MPEP § 2106.05(g).
Even when viewed in combination, any additional elements present do not integrate the
recited judicial exception into a practical application (Step 2A, Prong Two: No), and the claims
are directed to the judicial exception. (Revised Step 2A: Yes → Step 2B).
Step 2B: If Prong Two of Step 2A is not met, the examiner must consider whether the
claim as a whole amounts to ‘significantly more’ than the recited exception, i.e., whether any
‘additional element’, or combination of additional elements, adds an inventive concept to the
claim. The considerations of Step 2A Prong 2 and Step 2B overlap, but differ in that 2B also
requires considering whether the claims feature any “specific limitation(s) other than what is
well-understood, routine, conventional activity in the field” (WURC) (MPEP § 2106.05(d)).
Such a limitation, even if specifically recited, must still be excluded from interpretation under any of the Abstract Idea groupings. Step 2B further requires a re-evaluation of any additional elements drawn to extra-solution activity in Step 2A (e.g., gathering video, rendering output); however, no limitations appear directed to any novel collection or output generation per se. Limitations not indicative of an inventive concept/‘significantly more’ include those that are not specifically recited (instead being recited at a high level of generality), those that are established as WURC (the plurality of references cited herein evidences the WURC nature of the recited analysis), and those that are not ‘additional elements’ by nature of their analysis at Prong One of Step 2A (i.e., those directed to the exception itself). The July 2024 PEG describes that an improvement/inventive concept (for any ‘significantly more’ determination) cannot be to the judicial exception itself.
The claims in question recite little beyond limitations drafted at a high level of generality and falling under, e.g., the mental processes Abstract Idea grouping, and would accordingly monopolize the exception. The additional limitations of video capture, object tagging, and output display, as recited, are WURC, as evidenced by the body of prior art cited by the examiner in this Office action. (Step 2B: No).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 5, 9, 10, 14, and 18 are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Margarian et al. (US 20230060211 A1) (Hereinafter, “Margarian”).
As to claim 1, Margarian discloses “identifying a video stream from a video source” (Margarian, Fig. 4; Margarian, Paragraph 49, “In the case when all video cameras of the system contain an object tracker, the video data and metadata received in real time can be immediately transmitted to the data processing device”), “identifying a map, wherein at least a portion of the physical area represented by the map is monitored by the video source” (Margarian, Paragraph 59, “Namely, the user of the system puts the camera icon at a certain point on the map (this point corresponds to the coordinates of the real location of the camera). After that, the data processing device assigns a specific location to a specific video camera.”), “obtaining user input identifying objects in the video stream and the map, wherein the user input for each of object of the objects correlates a tag of said object in the video stream to a tag of a representation of said object in the map” (Margarian, Paragraphs 59-66, “During calibration, at least four virtual segments are determined on the map and the frame of the image under consideration, characterizing the coordinates of the location of a stationary object in space. The connections between them are set, while one end of each segment corresponds to the location of a stationary object in the frame and the other end of the segment corresponds to the location of the object on the terrain map. For calibrated cameras, matching of the object image position with their position on the terrain map is marked.”), “determining distortion in the video stream based on the user input” (Margarian, Paragraphs 59-66), “monitoring movement of one or more additional objects in the video stream” (Margarian, Figs. 2 and 4), and “identifying one or more movement trends for the one or more additional objects in the physical area based on the movement, the user input, and the distortion” (Margarian, Fig. 2 (discussing movement component); Margarian, Paragraph 105 (discussing user input component); Margarian, Paragraphs 59-66 (discussing distortion component)):
[Reproduced figures: Margarian, Figs. 2 and 4 (cited above)]
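For context, the four-correspondence calibration quoted above from Margarian is of the kind conventionally implemented as a perspective (homography) estimate between frame and map coordinates. The following sketch uses OpenCV with hypothetical point values; it is the examiner's illustration of the conventional technique, not Margarian's disclosed implementation:

```python
import cv2
import numpy as np

# Four correspondences, as in Margarian's calibration: one end of each
# virtual segment is a stationary object's pixel location in the frame,
# the other end is its location on the terrain map. Values are hypothetical.
frame_pts = np.float32([[102, 540], [873, 512], [660, 210], [235, 188]])
map_pts = np.float32([[12.0, 4.0], [48.0, 4.5], [46.0, 30.0], [10.0, 29.0]])

# A 3x3 perspective transform (homography) maps frame coordinates to map
# coordinates; deviation of observed points from this model is one way the
# "distortion" between the video's display and ground truth could be measured.
H = cv2.getPerspectiveTransform(frame_pts, map_pts)

# Project a tracked object's frame position onto the terrain map.
obj = np.float32([[[450, 360]]])          # shape (1, 1, 2), as OpenCV expects
print(cv2.perspectiveTransform(obj, H))   # the object's map coordinates
```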
As to claim 5, Margarian discloses obtaining second user input for a new object in the video stream, wherein the second user input comprises a tag of the new object in the video stream; and generating a location suggestion for a tag of a representation of the new object in the map based on the distortion, the user input, and the second user input. See Margarian, Paragraph 105 (discussing second user inputs resulting in tags); Margarian, Paragraph 49 (discussing existence of objects in the video stream); Margarian, Paragraph 75 (discussing how new objects may enter the system); Margarian, Paragraphs 64-67 (discussing object representation on a map, “For calibrated cameras, matching of the object image position with their position on the terrain map is marked”; “Thus, in the context of the described embodiment of the invention, all the video cameras of the system are linked to the said interactive map, while the video cameras are calibrated and contain an object tracker”); Margarian, Paragraph 105 (discussing how objects may be tagged in the video stream); Margarian, Paragraphs 49-50, 64, 75 (discussing location suggestion system based on distortion and user input(s)).
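Again for illustration only, a claim-5-style location suggestion can be expressed as applying the calibrated frame-to-map transform to the user's new tag. The helper below is hypothetical and assumes the transform H from the sketch above:

```python
import numpy as np

def suggest_map_location(H, frame_tag_xy):
    """Hypothetical sketch: given a user's tag of a new object in the video
    stream, apply the frame-to-map homography H (as recovered during the
    calibration sketched above) to propose where the object's
    representation belongs on the map."""
    x, y = frame_tag_xy
    v = H @ np.array([x, y, 1.0])      # homogeneous frame coordinates
    return (v[0] / v[2], v[1] / v[2])  # normalize back to map (x, y)

# e.g., suggest a map location for a newly tagged object at frame pixel
# (450, 360), reusing H from the preceding sketch:
# print(suggest_map_location(H, (450, 360)))
```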
As to claim 9, Margarian discloses “identifying a second video stream from a second video source; and wherein identifying one or more movement trends for the one or more additional objects in the physical area is further based on the second video stream” (Margarian, Fig. 2):
[Reproduced figure: Margarian, Fig. 2 (cited above)]
As to claim 10, Margarian discloses a storage system, a processing system coupled to the storage system, and program instructions stored on the storage system that, when executed, direct the computing apparatus (Margarian, Paragraph 38; Margarian, Paragraphs 53-54; Margarian, Paragraph 111). The remaining elements are identical in scope to claim 1 and have merely been presented in the form of a computing device. As such, claim 10 is rejected in accordance with the rejection of claim 1 discussed above.
As to claim 14, Margarian discloses the elements of claim 10 upon which claim 14 depends, as discussed above. The remaining elements are identical in scope to what was disclosed in claim 5, subject to the dependency on claim 10’s additional limitations. Accordingly, the claim is rejected in line with the rejections of claims 10 and 5 discussed above.
As to claim 18, Margarian discloses the elements of claim 10 upon which claim 18 depends, as discussed above. The remaining elements are identical in scope to what was disclosed in claim 9, subject to the dependency on claim 10’s additional limitations. Accordingly, the claim is rejected in line with the rejections of claims 10 and 9 discussed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Margarian in view of Bendtson (US 20210337133 A1) (Hereinafter “Bendtson”).
Margarian teaches the elements of claim 1 upon which claim 2 depends, as discussed in the section above. Margarian does not teach a heatmap indicating a frequency that the one or more additional objects are in different locations of the physical area or routes of the one or more additional objects traversing the physical area. However, Bendtson teaches the same (Bendtson, Paragraph 3).
Bendtson is analogous to the claimed invention as it is in the field of video surveillance technology. A person of ordinary skill in the art would understand a heatmap as a known technique that is readily transferable from one video surveillance system to another. See MPEP § 2143(I)(D), Examples 1-3. A heatmap is simply one conventional means of displaying the information collected by video surveillance cameras, and is accessory to the base device in such a way that its underlying function is not compromised. As stated in Bendtson, Paragraph 3, “Heatmaps are a useful tool for analysing video surveillance data over time.”
Thus, a person of ordinary skill in the art would be motivated to combine these references to provide a convenient means for displaying the movement trends of the video surveillance system, as of the effective filing date of the claimed invention.
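For context, a frequency heatmap of the kind Bendtson describes may be sketched as a simple occupancy count over the mapped area. The grid and data below are hypothetical, supplied only to illustrate the conventional technique:

```python
import numpy as np

GRID = (40, 60)  # rows x cols covering the mapped physical area

def heatmap(map_positions, grid=GRID):
    """Count how often tracked objects occupy each cell of the area.

    map_positions: iterable of (x, y) map coordinates normalized to 0..1.
    Returns a 2D array whose entries indicate the frequency that objects
    were observed in each cell -- the substance of the claim 2 limitation.
    """
    counts = np.zeros(grid, dtype=int)
    for x, y in map_positions:
        r = min(int(y * grid[0]), grid[0] - 1)  # clamp to the last row/col
        c = min(int(x * grid[1]), grid[1] - 1)
        counts[r, c] += 1
    return counts

# e.g., two objects lingering near the same corner of the area
print(heatmap([(0.1, 0.2), (0.1, 0.2), (0.8, 0.9)]).sum())
```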
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Margarian in view of Iwai (US 10235574 B2) (Hereinafter “Iwai”).
Margarian teaches the elements of claim 1 upon which claim 3 depends, as discussed in the section above. Margarian does not teach “wherein the distortion updates at least traffic paths in the physical area for movement”. However, Iwai provides for the same: a system wherein distortions may be corrected when video input is deemed inappropriate for further analysis, which serves to update the heatmap trends represented (Iwai, Col. 6, line 64 – Col. 7, line 14; Iwai, Col. 24, lines 42-46; Iwai, Fig. 1):
[Reproduced figure: Iwai, Fig. 1 (cited above)]
Iwai is analogous to the claimed invention as it is in the field of video surveillance. A person of ordinary skill in the art would understand the advantages of continuous distortion correction in the context of video surveillance data analysis. As stated in Iwai, Col. 2, lines 10 – 16 “If the activity map is output with low accuracy, a user may perform erroneous determination, and the user may make an unnecessary effort. Thus, the usability of the user may be deteriorated, and thus, a technology capable of improving the usability of the user by previously preventing such inconvenience is needed”. Within the broad language of the recited claim, the basic concept of repeated distortion correction activity is readily integrated into the base method of Margarian (which already provides for distortion correction) with the predictable result of further validating its surveillance analysis system.
Thus, a person of ordinary skill in the art would be motivated to combine these references to provide a continuous system of distortion correction and updated modeling, as of the effective filing date of the claimed invention.
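As an illustration of such continuous distortion correction (with hypothetical camera parameters, not values from Margarian or Iwai), each incoming frame could be undistorted before the traffic-path model is updated:

```python
import cv2
import numpy as np

# Hypothetical intrinsics: focal lengths and principal point of the camera.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
# Hypothetical radial/tangential lens distortion coefficients.
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

def corrected(frame):
    """Return a geometry-corrected frame for downstream path analysis,
    sketching the repeated distortion-correction activity attributed to
    Iwai above."""
    return cv2.undistort(frame, K, dist)

# Usage: corrected_frame = corrected(raw_frame), then track objects and
# rebuild traffic-path trends from the corrected geometry.
```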
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Margarian in view of Kim et al. (KR 20190119229 A) (Hereinafter “Kim”).
Margarian teaches the elements of claim 1 upon which claim 4 depends, as discussed in the section above. Margarian further teaches a second user input for a new object (Margarian, Paragraph 105; Margarian, Paragraph 75 (discussing how “new” objects may enter the system generally)), object representation on a map (Margarian, Paragraphs 64-66, “For calibrated cameras, matching of the object image position with their position on the terrain map is marked”), as well as the generation of a location suggestion for a tag of a new object in the video stream based on distortion and user input (Margarian, Paragraphs 49-50; Margarian, Paragraph 75; Margarian, Paragraph 64). Margarian does not teach “second user input for a new object in the map, wherein the second user input identifies a representation of the new object in the map”. However, Kim teaches the missing concept – inputs identifying a representation of an object on a map (Kim, Paragraph 99, “When the filtering unit 250 receives a user input including the attribute of the object of interest, the filtering unit 250 filters a cluster corresponding to the attribute of the object of interest based on the attribute of the event object and the attribute of the object of interest. Here, the object of interest represents an object to be monitored by the user. Here, the user input including the attribute of the object of interest includes information directly indicating the attribute of the object of interest or information indirectly indicating the attribute of the object of interest (e.g., a shape image of the object of interest, a map on which the movement of the object of interest is displayed).”; Kim, Paragraph 148, “In another embodiment, there may be a plurality of user inputs. For example, a first user input is associated with a first attribute, and a second user input is associated with a second attribute. In this case, when there is a user input, the attribute of the event object may be extracted in response to the user input.”).
Kim is analogous to the claimed invention because it is in the field of video surveillance. Margarian does not explicitly disclose selections of a new object in the map (as opposed to another user interface), but one of ordinary skill in the art would understand the combination of Margarian and Kim to be the application of a known technique to a known method. Margarian already discloses the linking of objects to a map via calibrated video surveillance cameras, as well as the inclusion of new objects, and inputs by the user for identification of objects. Kim teaches the specific identification of new objects via map representation. This specific input means would have the predictable result of providing a specific way to achieve the desired outcome of Margarian – identification of new objects in the video surveillance system. Integration would further result in an improved system with no compromised functioning. See MPEP § 2143(I)(D), Examples 1-3.
Thus, a person of ordinary skill in the art would be motivated to combine these references to provide an additional means for the user to select new objects for tracking, as of the effective filing date of the claimed invention.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Margarian in view of Monroe (US 20030025599 A1) (Hereinafter, “Monroe”).
Margarian teaches the elements of claim 1 upon which claim 6 depends, as discussed in the section above. Margarian does not explicitly teach “generating a display of at least one trend of the one or more trends on the map.” However, Monroe teaches the same (Monroe, Paragraph 266, “Moreover, the map display may be overlaid with vectors, showing the intruder's movements schematically through the building.”).
Monroe is analogous to the claimed invention, as it is in the field of video surveillance. Margarian readily accommodates the need to display trends on a user interface (Margarian, Paragraph 52, “The graphical user interface (GUI) is a system of data input and output tools for user interaction with a computing device based on the representation of all system objects and functions available to the user in the form of graphical components of the screen (windows, icons, menus, buttons, lists, etc.). Thus, the user has random access via data input/output devices to all visible screen objects —interface units — which are displayed on the display. The data input/output device can be, but is not limited to, mouse, keyboard, touchpad, stylus, joystick, trackpad, etc.”). Monroe constitutes the application of a known technique to the known method of Margarian – map display of movement trends (in Monroe’s case, a vector showing speed and direction). One of ordinary skill in the art would understand Monroe as an explicit means by which to apply the GUI concept of Margarian, with the predictable result of showing the user trends in association with the larger map.
Thus, a person of ordinary skill in the art would be motivated to combine these references to provide one way to display the movement trends to the user, as of the effective filing date of the claimed invention.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Margarian in view of Morita et al. (US 20210326596 A1) (Hereinafter “Morita”).
Margarian teaches the elements of claim 1 upon which claim 7 depends, as discussed in the section above. Margarian further teaches “one or more objects compris[ing] people, vehicles” (Margarian, Paragraph 92). Margarian does not explicitly teach robot objects. However, Morita teaches a video surveillance system that is capable of monitoring a robot (Morita, Paragraph 202).
Morita is analogous to the claimed invention, as it is in the field of video surveillance. Margarian includes a wide genus of objects that may be tracked within its disclosure (Margarian, Paragraph 92, “It should also be noted that a neural network can be learned to calculate feature vectors for any type of objects, such as, but not limited to: human, vehicle, animal, item, thing, etc.”). A robot readily falls under the categories of vehicle, item, or thing disclosed in Margarian. One of ordinary skill in the art would understand that a robot could also be monitored by conventional video surveillance systems, as taught in Morita and suggested in Margarian. A person of ordinary skill in the art would be motivated to include compatibility with as many objects as possible to increase the functionality of the system.
Thus, a person of ordinary skill in the art would be motivated to combine these references to provide additional objects that could be tracked by the system, as of the effective filing date of the claimed invention.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Margarian in view of Ataev et al. (US 11651667 B2) (Hereinafter “Ataev”).
Margarian teaches the elements of claim 1 upon which claim 8 depends, as discussed in the section above. Margarian does not explicitly teach the usage of an overhead map. However, Ataev (of the same assignee as Margarian) teaches the explicit usage of the same, in a system that broadly resembles that of Margarian (Ataev, Fig. 2).
Ataev is analogous to the claimed invention, as it is in the field of video surveillance. Margarian does not explicitly disclose an overhead map, but refers to broader map categories nonetheless (Margarian, Paragraphs 65-66). One of ordinary skill in the art would understand that an overhead map is a specific type of map. Ataev evidences the ready integration of an overhead map into conventional surveillance systems. A person of ordinary skill in the art would be motivated to include an overhead map for the bird's-eye view it provides of the overall system.
Thus, a person of ordinary skill in the art would be motivated to combine these references to provide a specific mapping means, as of the effective filing date of the claimed invention.
As to claims 11-13 and 15-17, they depend upon claim 10, which was rejected over Margarian as discussed in the section above. They otherwise recite the same limitations as claims 2-4 and 6-8, respectively. One of ordinary skill in the art would understand a computing device to be readily integrable with the video surveillance systems disclosed by Margarian. Thus, a person of ordinary skill in the art would be motivated to combine these references in the same manner as discussed above, as of the effective filing date of the claimed invention, with the addition that any methods are executed via the computing apparatus of Margarian for increased efficiency.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Margarian in view of Kim.
Claim 19 is almost identical to claim 13, with the exception that the phrase “identify one or more movement trends for the one or more additional objects in the physical area based on the movement, the user input, the second user input, the location, and the distortion” is used in place of “identify one or more movement trends for the one or more additional objects in the physical area based on the movement, the user input, and the distortion”. A second user input is also taught by Margarian (Margarian, Paragraph 105), which is further utilized in the identification of movement trends (Margarian, Fig. 2).
The remaining elements of the secondary references may be combined in the manner described in the rejection of claims 13 and 4 above. One of ordinary skill in the art would not be dissuaded from combining Margarian and Kim in the previously outlined manner because of the additional distinction presented in claim 19. Kim also teaches multiple inputs for identifying objects (Kim, Paragraph 148). As evidenced in both references, this is a known technique that would yield the predictable result of identifying multiple objects in the system – thus resulting in an improved system.
Thus, a person of ordinary skill in the art would be motivated to combine these references to provide for additional secondary input functionality, as of the effective filing date of the claimed invention.
The examiner also notes that the limitations of “identify a video stream from a video source” and “identify a map” are not present in claim 19, as compared to claim 10. However, the application of 35 U.S.C. 103 is unaffected.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Margarian in view of Kim and Bendtson.
For the elements of claim 19 upon which claim 20 depends, see the above rejection of claim 19. Margarian and Kim do not explicitly teach a heatmap. However, Bendtson teaches the same (Bendtson, Paragraph 3).
The addition of Kim does not otherwise affect the motivation to combine Bendtson and Margarian as outlined in the rejection of claims 11 and 2. Neither Kim nor Margarian is affected in its functionality by the inclusion of an additional heatmap, nor is the principle of operation changed. See MPEP §§ 2143.01(V), 2143.01(VI). Margarian and Kim analyze movement trends of objects, and Bendtson merely supplies an additional trend display that may be monitored as disclosed. Kim encourages including a means of frequency summary (Kim, Paragraph 156, “In addition, a video summary may be generated for an object of interest having a movement in a specific direction, and thus may be used to analyze a specific situation (when an accident frequently occurs in a specific direction at an intersection).”). One of ordinary skill in the art would recognize that applying the known technique of a heatmap would have the predictable result of displaying object position frequency, thus improving visualization of the overall system.
Thus, a person of ordinary skill in the art would be motivated to combine these references to provide for additional heatmap functionality, as of the effective filing date of the claimed invention.
Additional References
Additionally cited references (see attached PTO-892) otherwise not relied upon above have been made of record in view of the manner in which they evidence the general state of the art.
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOAH WILLIAM BOYAR whose telephone number is (571)272-8392. The examiner can normally be reached 8:30 – 5:00 EST, Monday – Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NOAH W BOYAR/Examiner, Art Unit 2669
/CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669