Prosecution Insights
Last updated: April 19, 2026
Application No. 18/334,068

METHOD AND SYSTEM FOR TESTING VIDEO ANALYTICS OF A VIDEO SURVEILLANCE SYSTEM

Final Rejection: §103, §112
Filed: Jun 13, 2023
Examiner: SULLIVAN, TYLER
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Honeywell International Inc.
OA Round: 2 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 66% (251 granted / 380 resolved) — above average, +8.1% vs TC avg
Interview Lift: +31.6% among resolved cases with interview (strong)
Typical Timeline: 2y 7m average prosecution; 31 applications currently pending
Career History: 411 total applications across all art units

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 2.8% (-37.2% vs TC avg)
§112: 30.3% (-9.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 380 resolved cases
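The headline figures above can be cross-checked with simple arithmetic. A minimal sketch follows; the dashboard vendor's exact formulas are not published, so the implied Tech Center average and the lift definition are assumptions for illustration only.

```python
# Sanity-check of the dashboard figures above (a sketch, not the vendor's formula).

granted, resolved = 251, 380  # figures shown on the dashboard

# Career allow rate: granted / resolved, displayed as 66%.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~66.1%

# The "+8.1% vs TC avg" delta implies a Tech Center 2400 average near 58%
# (assumption: the delta is a simple difference in percentage points).
implied_tc_avg = allow_rate - 0.081
print(f"Implied TC average: {implied_tc_avg:.1%}")

# Interview lift (+31.6% shown) is assumed to be the difference in allowance
# rate between resolved cases with and without an examiner interview.
```

This check only confirms internal consistency of the displayed numbers; it says nothing about how the 66% grant probability or 98% with-interview figure are modeled.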

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant amended claims 1 – 2, 4, 6 – 10, 12 – 14, and 17 beyond formalities and the 112 Rejections. The pending claims are 1 – 20 [Page 14 lines 1 – 7]. Applicant amended the Drawings and Specification to address the Examiner’s Drawing Objections [Page 14 lines 8 – 22]. Applicant did not amend the Specification to address the Examiner’s Specification Objection, which is thus maintained. Applicant amended the claims to address the Examiner’s 112(b) Rejections [Page 14 line 23 – Page 17 line 27]. The Examiner reconsiders the 112(b) Rejections in view of the amended claims. Applicant’s arguments filed October 29th, 2025 have been fully considered but they are not persuasive.

This section addresses Applicant’s response to the Examiner’s 112(b) Rejections. First, the Applicant argues the preambles should be afforded patentable weight and do not raise indefiniteness issues [Page 14 line 23 – Page 15 line 31]. The Examiner notes that while “testing” and “facility” are repeated in the body of the claim, the Examiner maintains the Rejection since the use of “for”, which may be intended-use language, raises indefiniteness issues regarding the functional treatment of the claims (e.g. “step for” in method claims or functional analysis in apparatus claims). Additionally, the intended use does not provide a significant difference from what the body of the claim and the features argued already provide, and thus is not given patentable weight [See MPEP 2111.02(II): “Catalina Mktg. 
Int’l, 289 F.3d at 808-09, 62 USPQ2d at 1785 ("[C]lear reliance on the preamble during prosecution to distinguish the claimed invention from the prior art transforms the preamble into a claim limitation because such reliance indicates use of the preamble to define, in part, the claimed invention.…Without such reliance, however, a preamble generally is not limiting when the claim body describes a structurally complete invention such that deletion of the preamble phrase does not affect the structure or steps of the claimed invention." Consequently, "preamble language merely extolling benefits or features of the claimed invention does not limit the claim scope without clear reliance on those benefits or features as patentably significant.")”; compare Intirtool, Ltd. v. Texar Corp., 369 F.3d 1289, 1294-96, 70 USPQ2d 1780, 1783-84 (Fed. Cir. 2004) (holding that the preamble of a patent claim directed to a "hand-held punch pliers for simultaneously punching and connecting overlapping sheet metal" was not a limitation of the claim because (i) the body of the claim described a "structurally complete invention" without the preamble, and (ii) statements in prosecution history referring to the "punching and connecting" function of the invention did not constitute "clear reliance" on the preamble needed to make the preamble a limitation)].

Second, the Applicant contends the metes and bounds of the relative term “effectiveness” are clear [Page 16 line 1 – Page 17 line 4]. The Examiner notes the term “effectiveness” is used 8 times in the Specification; however, no use is afforded any clear measure of “effectiveness” other than alarm generation. The term is argued again later in the arguments [Page 20 line 28 – Page 21 line 16]. 
The Examiner disagrees, as the subjective “effectiveness” (a relative term), while argued as a condition of triggering alarms, is also used in the context of subjective determinations for which no standard is given and no expected frequency of success in testing is provided [MPEP 2173.04(IV): “A claim term that requires the exercise of subjective judgment without restriction may render the claim indefinite. In re Musgrave, 431 F.2d 882, 893, 167 USPQ 280, 289 (CCPA 1970). Claim scope cannot depend solely on the unrestrained, subjective opinion of a particular individual purported to be practicing the invention. Datamize LLC v. Plumtree Software, Inc., 417 F.3d 1342, 1350, 75 USPQ2d 1801, 1807 (Fed. Cir. 2005); see also Interval Licensing LLC v. AOL, Inc., 766 F.3d 1364, 1373, 112 USPQ2d 1188 (Fed. Cir. 2014) (holding the claim phrase "unobtrusive manner" indefinite because the specification did not "provide a reasonably clear and exclusive definition, leaving the facially subjective claim language without an objective boundary")”]. Thus, because the term requires subjective judgment by one of ordinary skill in the art and no guidance is given in any of the 8 instances of “effectiveness” in the Specification, the claim term has indefinite metes and bounds.

Applicant’s arguments with respect to claims 1, 14, and 17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The Examiner renumbers the points in addressing arguments directed toward the Examiner’s 103 Rejection. First, the Applicant lists the references against the claims [Page 18 lines 1 – 5]. Second, the Applicant recites features of amended independent claim 1 and notes that independent claims 14 and 17 were similarly amended [Page 18 lines 6 – 32], and then recites the requirements of an obviousness-type Rejection [Page 18 lines 33 – 37]. 
Third, the Applicant provides their interpretation of the amended independent claims, citing Specification Paragraph 26 as support for the amendment [Page 19 lines 1 – 21]. Fourth, the Applicant contends the references address a different use-case than the one broadly claimed [Page 19 line 22 – Page 20 line 15]. In response to Applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which Applicant relies (i.e., Page 20 lines 3 – 6 (bullet points on how and why selections are made by a user), argued against Boghossian; Page 20 lines 8 – 10 (the claims recite “based at least in part” and not “specifically” as argued [Page 20 line 8], and no “optimal testing” is claimed nor is such a standard provided in the Specification); and Page 20 lines 12 – 15 (“characteristics” are claimed, not “parameters” as argued, and no specific optimization is claimed or given any support or basis in the Specification), argued against Rze (the short label used in this Office Action)) are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Fifth, the Applicant argues no motivation for combining the references exists because the amended features are argued as lacking from the references [Page 20 lines 16 – 27]. Contrary to Applicant’s assertions, Bog Paragraphs 70 – 80 and 278 – 283 as well as Figures 8 – 18 were cited to address the alleged deficiencies of the claims; thus, Bog is related / analogous art, and motivation to combine among the analogous references does exist. Further, in view of the amendments to the claims, an additional reference against the claims will be cited. 
In response to Applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the Examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, in the previously cited portions of Bog, Carey, and Rze, the Examiner found the teachings were analogous art and thus combinable to one of ordinary skill in the art.

Sixth, the Applicant contends the references lack teachings of “effectiveness” [which remains rejected as a relative term, as the Specification provides no standard for how such a determination is made] [Page 20 line 28 – Page 21 line 15]. However, the enumerated arguments are largely directed toward features not claimed, although the “generic alarm” [Page 21 line 4] was argued as a measure of the claimed “effectiveness” [Page 16 lines 20 – 21]; thus the argument is unpersuasive, as the word is being used with multiple meanings / connotations. In response to Applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which Applicant relies (i.e., closed-loop methodologies; specific alarms; and comparative effectiveness (nothing is claimed for such a comparison)) are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). 
Seventh, the Applicant concludes the dependent claims (except for claim 18) are allowable for the reasons given, as are amended independent claims 14 and 17 [Page 21 lines 16 – 22]. Eighth, the Applicant contends Henderson does not cure the alleged deficiencies of amended independent claim 17 (similar to claim 1) and thus claim 18 is allowable [Page 21 line 23 – Page 22 line 2]. The Examiner notes Henderson, in Figure 7A and Paragraph 89, teaches menu options for user selection, thus rendering obvious the features of the amended independent claims. While the Applicant’s points may be understood, the Examiner respectfully disagrees and maintains the Rejection; however, the Examiner cites an additional reference in the sole interest of expediting prosecution.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on September 11th, 2023 and December 8th, 2024 were filed before the mailing date of the First Action on the Merits (mailed July 30th, 2025). The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the Examiner.

Specification

The disclosure is objected to because of the following informalities: In Paragraph 49 and throughout, the positions “X1, Y1” should read as --(x1, y1)-- to be consistent with Figure 10 notations. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 – 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, the preamble generally is not afforded patentable weight, and the claim's recitation of the intended use language “for testing one or more video analytics of a video surveillance system” in the preamble raises issues of indefinite metes and bounds as to the weight to afford the preamble of the claim. For purposes of Examination, the intended use portion of the preamble is not being afforded patentable weight.

Regarding claim 14, the preamble generally is not afforded patentable weight, and the claim's recitation of the intended use language “for testing one or more video analytics of a video surveillance system” in the preamble raises issues of indefinite metes and bounds as to the weight to afford the preamble of the claim. For purposes of Examination, the intended use portion of the preamble is not being afforded patentable weight.

Regarding claim 17, the preamble generally is not afforded patentable weight, and the claim's recitation of the intended use language “for testing a video surveillance system” in the preamble raises additional issues of indefinite metes and bounds as to the weight to afford the preamble of the claim. For purposes of Examination, the intended use portion of the preamble is not being afforded patentable weight.

Regarding claims 2 – 13, 15 – 16, and 18 – 20, the dependent claims do not cure the deficiencies of their respective independent claims and thus are similarly Rejected.

The term “effectiveness” in claims 1, 14, and 19 is a relative (subjective) term which renders the claim indefinite. The term “effectiveness” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. 
The term “desirable” in claims 1, 14, and 19 is a relative (subjective) term which renders the claim indefinite. The term “desirable” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering the patentability of the claims, the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 – 11, 13 – 17, and 19 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Boghossian, et al. (US PG PUB 2017/0372575 A1, referred to as “Bog” throughout) [Cited in Applicant’s December 8th, 2024 IDS], and further in view of Carey (US PG PUB 2021/0383132 A1, referred to as “Carey” throughout), Rzeszotarski, et al. (US PG PUB 2015/0310643 A1, referred to as “Rze” throughout), and Pastrana, et al. (WO 2022/067342 A2, referred to as “Pastrana” throughout).

Regarding claim 1, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects, in which the menus are combinable with the other references to display the tests available. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and the use of analytic algorithms as taught by Carey, and to incorporate the distribution and other parameters of simulated / virtual objects as taught by Rze, with the selection methods and menus / options as taught by Pastrana. The combination teaches the video surveillance system including a video camera having a field of view (FOV) that encompasses part of the facility [Bog Figures 1 – 3, 8 and 17 (subfigures included; see at least reference characters 102, 804, 806, 810, 842, 844, 1740, and 1746) as well as Paragraphs 19 (video camera embodiments), 101 – 103, 162 – 163, 244 – 251 (cameras within / monitoring a building / office / facility (obvious variants to one of ordinary skill in the art) with surveillance applications (e.g. Paragraph 245)), and 359 – 364 (video cameras arranged to monitor facilities / embodiments for surveillance)], the method comprising: receiving a user input that identifies a selected video analytics algorithm from a plurality of video analytics algorithms for execution on a video stream captured by the video camera [Bog Figures 8 – 14 (subfigures included; see at least reference characters 1124, 1126, 1128, 1136, 1138, and 1140) and 17 – 21 as well as Paragraphs 17 and 129 – 130 (obviousness between analysis and analytics teachings), 257 – 270 (user generated analysis / analytic algorithms as obvious variants); Carey Paragraphs 5 (user input to select analytics), 56 – 59 (user interface to select analytics) and 76 – 81 (user selected object and analytics to use); Pastrana Figures 7, 9, 11 and 13 (subfigures included, with menus inside menus with different options in sub-menus) as well as Paragraphs 228 – 231, 237 – 240 (multiple menus to select options to place / locate objects – to combine with the options of Bog), 318, and 338 (menus used with 
submenus / options dependent on previous selections)]; generating simulated objects that are to be superimposed on a video stream captured by the video camera, based at least in part on the selected video analytics algorithm identified by the user input [Bog Figures 17 – 19 (subfigures included; see at least reference character 1902) as well as Paragraphs 420 – 424 (superimpose virtual objects into video for test, where the virtual object is an obvious variant of the claimed simulated object); Pastrana Figures 7, 9, and 11 (subfigures included) as well as Paragraphs 228 – 231, 237 – 240 (multiple menus to select options to place / locate objects – to combine with the options of Bog), 273 (setting orientation of objects / dependent menus for objects to place), 318, and 338 (menus used with submenus / options dependent on previous selections)], wherein generating the simulated objects includes: determining a plurality of characteristics of the simulated objects that are desirable for testing the selected video analytics algorithm including one or more of [Bog Figures 8 – 18 (subfigures included – see at least the parameters in Figures 11 – 13 and reference characters 1806 and 1812) as well as Paragraphs 70 – 80 (various parameters of objects set by user input), 397 – 412 (size of virtual object used / detected), 418 – 423 (parameters for the speed / movement of virtual / simulated objects); Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used, to modify Bog and Carey (Figures 2, 6, and 7 at least)) and 72 – 76 (number / user-set distribution of simulated objects to include); Pastrana Figures 7, 9, and 11 (subfigures included) as well as Paragraphs 228 – 231, 237 – 240 (multiple menus to select options to place / locate objects – to combine with the options of Bog), 273 (setting orientation of objects / dependent menus for objects to place), 318, and 338 (menus used with submenus / options dependent on 
previous selections)]: a quantity of the simulated objects in the FOV [Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used further taught in Paragraph 45 (plurality of objects to be used) and 48 (various objects)) and 72 – 76 (number / user set distribution of simulated objects to include)]; a distribution of the simulated objects in the FOV [Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used) and 72 – 76 (number / user set distribution of simulated objects to include)]; a starting point for each of the simulated objects in the FOV [Bog Figures 13 – 21 as well as Paragraphs 70 – 80 and 419 – 423 (see at least embodiments for point to point movement control and key point control)]; a movement of each of the simulated objects in the FOV [Bog Figures 13 – 21 as well as Paragraphs 70 – 80 and 419 – 423 (speed / movement of virtual objects controlled where controlling accelerations renders obvious movement control as well as embodiments for point to point movement control)]; superimposing the simulated objects with the plurality of characteristics on the FOV of the video stream captured by the video camera to create an augmented video stream [Bog Figures 3 – 4, 10 – 11, 17 – 21 (subfigures included and see at least reference characters 400, 1000, 1004, 1902, and 2100) as well as Paragraphs 257 – 260 (screen view of camera feeds), 410 – 413 (virtual object on a display screen), 420 – 433 (superimpose / overlay (obvious variant – see at least Paragraph 429) virtual objects into video for test where the virtual object is an obvious variant of the claimed simulated object); Carey Figures 1, 2, and 6 (see at least reference character 715, 718, 720, and 725 with computer displays of captured video) as well as Paragraphs 85 – 90 (storing images for display to use for video analytics testing / surveillance)]; and processing the augmented video 
stream using the selected analytics algorithm to test the effectiveness of the selected analytics algorithm [Bog Figures 11 – 14 and 18 – 22 (subfigures included) as well as Paragraphs 416 – 425 (testing video with virtual / simulated objects overlaid (Paragraphs 426 – 433) in the video), 426 – 440 (test scenarios / analysis / results of tests, combinable with Carey Figures 7 – 9 as well as Paragraphs 82 – 90 and 102 (assessments of alerts and determinations of patterns of analytic algorithms))]. The motivation to combine Carey with Bog is to combine features in the same / related field of invention of video surveillance and verification systems [Carey Paragraphs 2 and 5] in order to improve analysis / real-time operation [Carey Paragraphs 3 and 7, where the Examiner observes at least KSR Rationales (D) or (F) are also applicable]. The motivation to combine Rze with Carey and Bog is to combine features in the same / related field of invention of data visualization and tools for data objectification [Rze Paragraphs 3 – 5] in order to improve repeatability and the processing of large numbers of objects [Rze Paragraphs 4 and 6, where the Examiner observes at least KSR Rationales (D) or (F) are also applicable]. The motivation to combine Pastrana with Rze, Carey, and Bog is to combine features in the same / related field of invention of graphical user interfaces to display virtual objects [Pastrana Paragraphs 2 – 4] in order to improve the efficient usage of interfaces for a user to manage virtual object insertion into images / video [Pastrana Paragraphs 3 – 5 and 8 – 9, where the Examiner notes KSR Rationales (D) or (F) are also applicable]. This is the motivation to combine Bog, Carey, Rze, and Pastrana which will be used throughout the Rejection.

Regarding claim 2, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. 
Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects, in which the menus are combinable with the other references to display the tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and the use of analytic algorithms as taught by Carey, and to incorporate the distribution and other parameters of simulated / virtual objects as taught by Rze, with the selection methods and menus / options as taught by Pastrana. The combination teaches tagging each of one or more regions of the FOV of the image with a corresponding region tag [Bog Figures 3 – 4, 7 – 13, and 17 – 20 (subfigures included; see at least reference characters 800, 1200, 1202, 1204, 1206) as well as Paragraphs 244 – 248 (user to view locations of cameras for imaging / regions covered by the camera), 257 – 265 (select region / zoom on region being imaged by a camera, including labelled zones (an obvious variant of the claimed tags to one of ordinary skill in the art), in combination with Paragraphs 278 – 283 (zones identified for inserting simulation / virtual objects)), and 327 – 336 (defining zones and labelling zones for further processing / testing as another obvious variant of the claimed tagging); Carey Paragraphs 67 – 77 (locations / landmarks used as an obvious variant of the claimed region identification, rendering obvious the claimed “tag” feature, or variants in Paragraphs 84 and 93 for particular locations / events)]; wherein the region tag defines a region type [Bog Figures 3 – 4, 7 – 14 (see menu of options / areas), and 17 – 20 (subfigures included; see at least 
reference characters 800, 1200, 1202, 1204, 1206) as well as Paragraphs 244 – 248 (user to view locations of cameras for imaging / regions covered by the camera), 257 – 265 (user to select region / zoom on region being imaged by a camera, including labelled zones (an obvious variant of the claimed tags to one of ordinary skill in the art), in combination with Paragraphs 278 – 283 (zones identified for inserting simulation / virtual objects)), and 327 – 336 (defining zones and labelling zones for further processing / testing as another obvious variant of the claimed tagging); Carey Paragraphs 67 – 77 (locations / landmarks used as an obvious variant of the claimed region identification, rendering obvious the claimed “tag” feature, or variants in Paragraphs 84 and 93 – 98 for particular locations / events such as stores or types of objects / situations)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana.

Regarding claim 3, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects, in which the menus are combinable with the other references to display the tests available. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and the use of analytic algorithms as taught by Carey, and to incorporate the distribution and other parameters of simulated / virtual objects as taught by Rze, with the selection methods and menus / options as taught by Pastrana. The combination teaches wherein the region type includes one or more of a wall, a fence, an obstacle, a secure area, a window, a door, a pedestrian lane, and a vehicle lane [Bog Figures 8 and 17 – 21 as well as Paragraphs 32 – 36 (vehicles and doors / windows are regions imaged, with vehicle monitoring in Paragraph 62), 101 (fence regions), 136 (train platform), 362 – 267 (pedestrian areas as region types), 395 – 402 (wall types considered in object placement), 413 – 420 (obstacle situations / crashes / people moving (Paragraph 418) or pedestrians (Paragraph 413)), 428 – 433 (streets are regions on which vehicles travel); Carey Paragraphs 55 – 59 (vehicle regions and secure areas) and 92 (security personnel in a region / area, rendering obvious regions / FOVs of a secure area type)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana.

Regarding claim 4, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects, in which the menus are combinable with the other references to display the tests available. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and the use of analytic algorithms as taught by Carey, and to incorporate the distribution and other parameters of simulated / virtual objects as taught by Rze, with the selection methods and menus / options as taught by Pastrana. The combination teaches wherein determining the plurality of characteristics of the simulated objects includes determining where each of the one or more simulated objects are allowed to move in the FOV, where each of the one or more simulated objects are not allowed to move based on the one or more user identified regions and corresponding region tags, or both [See claims 1 and 2 for “region tag” citations, and additionally Bog Figures 13 – 21 as well as Paragraphs 70 – 80, 396 – 403 (control of motion within a scene / FOV of video with limits based on regions / objects in the FOV (e.g. walls)), and 419 – 423 (point-to-point movement control renders obvious the regions in which the objects move; further examples in Paragraphs 428 – 433); Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used), 46 – 50 (movement control of objects), and 72 – 76 (number / user-set distribution of simulated objects to include)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana.

Regarding claim 5, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. 
Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches wherein the simulated objects include one or more of a person, an animal [See last limitation for additional citations and Carey Paragraph 55 (animals / vehicle objects)] and a vehicle [Bog Figures 13 – 21 as well as Paragraphs 70 – 80, 396 – 403, and 415 – 423 (person / vehicle objects at least suggested); Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used), 45 – 50 (movement control of objects such as vehicles (Paragraphs 34 and 45)), and 72 – 76 (number / user set distribution of simulated objects to include)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana. Regarding claim 6, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches wherein the plurality of analytics algorithms [See next limitation for citations and additionally Bog Paragraphs 17 and 129 – 130 (obviousness between analysis and analytics teachings)] include one or more of a crowd detection algorithm, a crowd analytics algorithm, a people count algorithm, a behavior detection algorithm, an intrusion detection algorithm, a perimeter protection algorithm and a tailgating detection algorithm [Bog Figures 8 – 14 and 17 – 21 as well as Paragraphs 24, 36 (security analysis), 57 (intrusion detection), 63 – 64 (tailgating analysis), 251 – 255, 292 – 296 and 322 – 326 (detection area setting and analysis rendering obvious perimeter analytics further suggested in Carey Paragraph 5); Carey Paragraphs 100 – 104 (crowd parameters / size estimation (Paragraph 101) analytics determinations and other characteristics in Paragraph 88 to combine with Bog Paragraph 62)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana. Regarding claim 7, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications.
Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches receiving a first user input that identifies one or more of the regions [Bog Figures 3 – 4, 7 – 13, and 17 – 20 (subfigures included and see at least reference character 800) as well as Paragraphs 244 – 248 (user to view locations of cameras for imaging / regions covered by the camera), 257 – 265 (user to select region / zoom on region being imaged by a camera including labelled zones (obvious variant of the claimed tags to one of ordinary skill in the art) in combination with Paragraphs 278 – 283 (zones identified for inserting simulation / virtual objects)), and 327 – 336 (defining zones and labelling zones for further processing / testing); Carey Paragraphs 67 – 75 (locations / landmarks used as obvious variant of the claimed region identification)]; wherein the first user input includes drawing annotations on the image while displayed on a display screen [Bog Figures 13 – 21 (subfigures included and see at least reference character 1304) as well as Paragraphs 324 – 333 (drawing tool modifying regions rendering obvious the “annotation” claimed to one of ordinary skill in the art further explained in Paragraphs 417 – 422)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana. Regarding claim 8, Bog teaches a system to place virtual objects in video streams to test analytics algorithms.
Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches comprising receiving a second user input that tags at least one of the one or more regions with a corresponding region tag including a region tag type [The Examiner observes the claim is an obvious duplication of parts (MPEP2144.04 VI B) and additionally Bog Figures 3 – 4, 7 – 14 (see menu of options / areas), and 17 – 20 (subfigures included and see at least reference characters 800, 1200, 1202, 1204, 1206) as well as Paragraphs 244 – 248 (user to view locations of cameras for imaging / regions covered by the camera), 257 – 265 (user to select region / zoom on region being imaged by a camera including labelled zones (obvious variant of the claimed tags to one of ordinary skill in the art) in combination with Paragraphs 278 – 283 (zones identified for inserting simulation / virtual objects)), and 327 – 336 (defining zones and labelling zones for further processing / testing as another obvious variant of the claimed tagging); Carey Paragraphs 67 – 77 (locations / landmarks used as obvious variant
of the claimed region identification rendering obvious the claimed “tag” feature or variants in Paragraph 84 and 93 – 98 for particular locations / events such as stores or types of objects / situations)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana. Regarding claim 9, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches comprising using video analytics to automatically identify and tag at least one of the one or more user identified regions with a corresponding region tag [The Examiner notes the feature may be regarded as obvious in view of MPEP2144.04 III (Automating a Manual Activity) and additionally Bog Figure 11 (subfigures included) as well as Paragraphs 262 – 267 (labelling zones / regions using analysis functions) and 281; Carey Paragraphs 79 – 82 and 100 – 103 (automatic region / activity in a region detection)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana.
Regarding claim 10, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana.
The combination teaches wherein the plurality of characteristics of the simulated objects include one or more of [The Examiner notes in the interest of brevity not all elements in the list may have citations as they are not necessarily required by the claim within the broadest reasonable interpretation of the claim, thus amendments to the list may result in scope changes]: an object type of one or more of the simulated objects [Bog Figures 13 – 21 as well as Paragraphs 70 – 80, 396 – 403, and 415 – 423 (person / vehicle objects at least suggested); Carey Paragraph 55 (animals / vehicle objects); Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used), 45 – 50 (movement control of objects such as vehicles (Paragraphs 34 and 45)), and 72 – 76 (number / user set distribution of simulated objects to include)]; a count related to a number of simulated objects [Bog Figures 11 – 14 and 19 – 21 (subfigures included) as well as Paragraphs 291 – 295 (size of objects with number considerations), 358 – 368 and 389 – 404 (real size object determinations to affect / base the virtual object size with count / number considerations with checks and user checks in Paragraphs 401 – 404); Carey Paragraphs 62 and 75 – 76 (object size considerations for simulated / virtual objects to generate)]; a distribution of two or more of the simulated objects [Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used) and 72 – 76 (number / user set distribution of simulated objects to include)]; a starting point for one or more of the simulated objects [Bog Figures 13 – 21 as well as Paragraphs 70 – 80 and 419 – 423 (see at least embodiments for point to point movement control and key point control)]; a speed of movement of one or more of the simulated objects [Bog Figures 13 – 21 as well as Paragraphs 70 – 80, 285 – 290 (speed of objects determined / controlled) and 419 – 423 
(speed / movement of virtual objects controlled where controlling accelerations renders obvious movement control as well as embodiments for point to point movement control); Carey Paragraphs 83 – 88 and 97 (speed of objects tracked / crowd speeds measured)]; a variation in speed of movement of one or more of the simulated objects over time [Bog Figures 13 – 21 as well as Paragraphs 70 – 80, 285 – 290 (speed of objects determined / controlled) and 419 – 423 (speed / movement of virtual objects controlled where controlling accelerations renders obvious movement control as well as embodiments for point to point movement control), 460 – 465 (variations among objects); Carey Paragraphs 83 – 88 and 97 – 103 (relative speed and parameters of objects tracked / crowd speeds measured where the speed of the crowd / movement is measured / simulated)]; a variation in speed of movement between two or more of the simulated objects [Bog Figures 4 – 8 and 17 – 21 as well as Paragraphs 285 – 290 (speed of objects determined / controlled), 395 – 405 (relative motion / speed changes between objects), 419 – 423 (speed / movement of virtual objects controlled where controlling accelerations renders obvious variation in motion / speed control as well as embodiments for point to point movement control), 460 – 465 (variations among objects); Carey Paragraphs 88 – 90 (speed / movement variations between people / crowd interactions); Rze Paragraphs 36 – 39 and 76 – 79 (relative speeds / relative motion considered)]; and a type of movement of one or more of the simulated objects [Bog Figures 4 – 8 and 17 – 21 as well as Paragraphs 285 – 290 (speed of objects determined / controlled), 395 – 405 (relative motion / speed changes between objects), 419 – 425 (speed / movement of virtual objects controlled where controlling accelerations renders obvious variation in motion / speed control as well as embodiments for point to point movement control rendering obvious a type of movement control to one
of ordinary skill in the art)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana. Regarding claim 11, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches wherein the distribution of two or more of the simulated objects comprises one or more of [The Examiner notes in the interest of brevity not all elements in the list may have citations as they are not necessarily required by the claim within the broadest reasonable interpretation of the claim, thus amendments to the list may result in scope changes] a Weibull distribution, a Gaussian distribution, a Binomial distribution, a Poisson distribution, a Uniform distribution, a Random distribution, and a Geometric distribution [Rze Paragraphs 36 – 38 and 73 – 76 (random distribution of objects for groups of objects)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana.
Regarding claim 13, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana.
The combination teaches the plurality of characteristics of the simulated objects includes the type of movement of one or more of the simulated objects, wherein the type of movement includes one or more of [The Examiner notes in the interest of brevity not all elements in the list may have citations as they are not necessarily required by the claim within the broadest reasonable interpretation of the claim, thus amendments to the list may result in scope changes] a random directional movement [Rze Paragraphs 36 – 38 and 73 – 76 (random distribution of objects for groups of objects and then another / modifications to the distributions are made)], a random directional movement constrained by movement of other simulated objects, constrained by user identified regions of the FOV, a directional movement along a predefined corridor, a varying speed movement [Bog Figures 4 – 8 and 13 – 21 as well as Paragraphs 285 – 290 (speed of objects determined / controlled), 395 – 405 (relative motion / speed changes between objects), 419 – 423 (speed / movement of virtual objects controlled where controlling accelerations renders obvious variation in motion / speed control as well as embodiments for point to point movement control), 460 – 465 (variations among objects); Carey Paragraphs 88 – 90 (speed / movement variations between people / crowd interactions); Rze Paragraphs 36 – 39 and 76 – 79 (relative speeds / relative motion considered)], a group movement associated with two or more of the simulated objects, or any combination thereof [Bog Figures 13 – 21 as well as Paragraphs 70 – 80, 285 – 290 (speed of objects determined / controlled) and 419 – 423 (speed / movement of virtual objects controlled where controlling accelerations renders obvious movement control as well as embodiments for point to point movement control), 460 – 465 (variations among objects); Carey Paragraphs 83 – 88 and 97 – 103 (relative speed and parameters of objects tracked / crowd speeds measured where the
speed of the crowd / movement is measured / simulated)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana. Regarding claim 14, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches the video surveillance system including a video camera having a field of view (FOV) that encompasses part of the facility [Bog Figures 1 – 3, 8 and 17 (subfigures included and see at least reference characters 102, 804, 806, 810, 842, 844, 1740, and 1746) as well as Paragraphs 19 (video camera embodiments), 101 – 103, 162 – 163, 244 – 251 (cameras within / monitoring a building / office / facility (obvious variants to one of ordinary skill in the art) with surveillance applications (e.g.
Paragraph 245)), and 359 – 364 (video cameras arranged to monitor facilities / embodiments for surveillance)], the method comprising: generating simulated objects that are to be superimposed on a video stream captured by the video camera, based at least in part on the selected video analytics algorithm identified by the user input [Bog Figures 17 – 19 (subfigures included and see at least reference character 1902) as well as Paragraphs 420 – 424 (superimpose virtual objects into video for test where the virtual object is an obvious variant of the claimed simulated object); Pastrana Figures 7, 9, and 11 (subfigures included) as well as Paragraphs 228 – 231, 237 – 240 (multiple menus to select options to place / locate objects – to combine with the options of Bog), 273 (setting orientation of objects / dependent menus for objects to place), 318, and 338 (menus used with submenus / options dependent on previous selections)], wherein generating the simulated objects includes: receiving a user input that identifies a selected video analytics algorithm from a plurality of video analytics algorithms for execution on a video stream captured by the video camera [Bog Figures 8 – 14 (subfigures included as well as see at least reference characters 1124, 1126, 1128, 1136, 1138, and 1140) and 17 – 21 as well as Paragraphs 24, 36 (security analysis), 57 (intrusion detection), 63 – 64 (tailgating analysis), 17 and 129 – 130 (obviousness between analysis and analytics teachings), 257 – 270 (user generated analysis / analytic algorithms as obvious variants), and 251 – 255, 292 – 296 and 322 – 326 (detection area setting and analysis rendering obvious perimeter analytics further suggested in Carey Paragraph 5); Carey Paragraphs 5 (user input to select analytics), 56 – 59 (user interface to select analytics), 76 – 81 (user selected object and analytics to be used), and 100 – 104 (crowd parameters / size estimation (Paragraph 101) analytics determinations and other characteristics in
Paragraph 88 to combine with Bog Paragraph 62); Pastrana Figures 7, 9, 11 and 13 (subfigures included with menus inside menus with different options in sub-menus) as well as Paragraphs 228 – 231, 237 – 240 (multiple menus to select options to place / locate objects – to combine with the options of Bog), 318, and 338 (menus used with submenus / options dependent on previous selections)]; based at least in part on the particular video analytics algorithm identified by the user input, determining a plurality of characteristics of the simulated objects that are desirable for testing the selected video analytics algorithm including one or more of [Bog Figures 8 – 18 (subfigures included – see at least parameters in Figures 11 – 13 and reference characters 1806 and 1812) as well as Paragraphs 24, 36 (security analysis), 57 (intrusion detection), 63 – 64 (tailgating analysis), 70 – 80 (various parameters of objects set by user input), 397 – 412 (size of virtual object used / detected), 418 – 423 (parameters for the speed / movement of virtual / simulated objects); Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used to modify Bog and Carey (Figures 2, 6, and 7 at least)) and 72 – 76 (number / user set distribution of simulated objects to include); Pastrana Figures 7, 9, and 11 (subfigures included) as well as Paragraphs 228 – 231, 237 – 240 (multiple menus to select options to place / locate objects – to combine with the options of Bog), 273 (setting orientation of objects / dependent menus for objects to place), 318, and 338 (menus used with submenus / options dependent on previous selections)]: a quantity of the simulated objects in the FOV [Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used further taught in Paragraph 45 (plurality of objects to be used) and 48 (various objects)) and 72 – 76 (number / user set distribution of
simulated objects to include)]; a distribution of the simulated objects in the FOV [Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used) and 72 – 76 (number / user set distribution of simulated objects to include)]; a starting point for each of the simulated objects in the FOV [Bog Figures 13 – 21 as well as Paragraphs 70 – 80 and 419 – 423 (see at least embodiments for point to point movement control and key point control)]; a movement of each of the simulated objects in the FOV [Bog Figures 13 – 21 as well as Paragraphs 70 – 80 and 419 – 423 (speed / movement of virtual objects controlled where controlling accelerations renders obvious movement control as well as embodiments for point to point movement control)]; superimposing the simulated objects on the FOV of the video stream captured by the video camera to create an augmented video stream [Bog Figures 17 – 21 (subfigures included and see at least reference character 1902) as well as Paragraphs 420 – 433 (superimpose / overlay (obvious variant – see at least Paragraph 429) virtual objects into video for test where the virtual object is an obvious variant of the claimed simulated object); Carey Figures 1, 2, and 6 (see at least reference characters 715, 718, 720, and 725 with computer displays of captured video) as well as Paragraphs 85 – 90 (storing images for display to use for video analytics testing / surveillance)]; and processing the augmented video stream using the particular video analytics algorithm identified by the user input to test the effectiveness of the particular video analytics algorithm when run on the video stream captured by the video camera [Bog Figures 3 – 4, 10 – 14 and 17 – 22 (subfigures included and see at least reference characters 400, 1000, 1004, 1902, and 2100) as well as Paragraphs 257 – 260 (screen view of camera feeds), 410 – 433 (testing video with virtual / simulated objects overlaid (Paragraphs 426 – 433) in the
video), 426 – 440 (test scenarios / analysis / results of tests combinable with Carey Figures 7 – 9 as well as Paragraphs 82 – 90 and 102 (assessments of alerts and determinations of patterns of analytic algorithms))]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana as the methods are similar in scope. Regarding claim 15, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana.
The combination teaches wherein the particular video analytics algorithm comprises one of an intrusion detecting algorithm, a crowd detection algorithm, a loitering detection algorithm, an unauthorized entry detection algorithm, a tailgating detection algorithm, and a behavior detection algorithm to detect one or more predetermined behaviors [Bog Figures 8 – 14 and 17 – 21 as well as Paragraphs 24, 36 (security analysis), 57 (intrusion detection), 63 – 64 (tailgating analysis), 251 – 255, 292 – 296 and 322 – 326 (detection area setting and analysis rendering obvious perimeter analytics further suggested in Carey Paragraph 5); Carey Paragraphs 76 – 81 (crowd behavior analytics) and 100 – 104 (crowd parameters / size estimation (Paragraph 101) analytics determinations and other characteristics in Paragraph 88 to combine with Bog Paragraph 62)]. See claim 14 for the motivation to combine Bog, Carey, Rze, and Pastrana. Regarding claim 16, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches wherein the user input includes user input selecting one or more video analytics algorithms from the plurality of video analytics algorithms for use in conjunction with the video stream of the video camera [Bog Figures 8 – 14 (subfigures included as well as see at least reference characters 1124, 1126, 1128, 1136, 1138, and 1140) and 17 – 21 as well as Paragraphs 17 and 129 – 130 (obviousness between analysis and analytics teachings), 257 – 270 (user generated analysis / analytic algorithms as obvious variants); Carey Paragraphs 5 (user input to select analytics), 56 – 59 (user interface to select analytics) and 76 – 81 (user selected object and analytics to be used)]. See claim 14 for the motivation to combine Bog, Carey, Rze, and Pastrana. Regarding claim 17, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytic algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects in which the menus are combinable with the other references to display tests available.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches a video camera having a field of view (FOV) that encompasses a region of a facility [Bog Figures 1 – 3, 8 and 17 (subfigures included and see at least reference characters 102, 804, 806, 810, 842, 844, 1740, and 1746) as well as Paragraphs 19 (video camera embodiments), 101 – 103, 162 – 163, 244 – 251 (cameras within / monitoring a building / office / facility (obvious variants to one of ordinary skill in the art) with surveillance applications (e.g. Paragraph 245)), and 359 – 364 (video cameras arranged to monitor facilities / embodiments for surveillance)], the method comprising: generating simulated objects that are to be superimposed on a video stream captured by the video camera [Bog Figures 17 – 19 (subfigures included and see at least reference character 1902) as well as Paragraphs 420 – 424 (superimpose virtual objects into video for test where the virtual object is an obvious variant of the claimed simulated object)], wherein generating the simulated objects includes: receiving from a user a voice [Rze Paragraphs 102 – 103 (user voice input – to combine with next limitation)] or text based description of one or more scenarios to be tested [Bog Figures 8 – 14 (subfigures included as well as see at least reference characters 1124, 1126, 1128, 1136, 1138, and 1140 as text based descriptions) and 17 – 21 as well as Paragraphs 17 and 129 – 130 (obviousness between analysis and analytics teachings), 257 – 275 (user generated analysis / analytic algorithms as obvious variants and text input such as in Paragraphs 274 – 275); Carey Paragraphs 5 
(user input to select analytics), 56 – 59 (user interface to select analytics) and 76 – 81 (user selected object and analytics to be used, rendering obvious the claimed scenarios (due to the type of analytics tested) to one of ordinary skill in the art)]; identifying one or more video analytics algorithms based at least in part on the voice or text based description of the one or more scenarios to be tested [See previous limitation for citations and additionally Bog Figures 8 – 14 (subfigures included as well as see at least reference characters 1124, 1126, 1128, 1136, 1138, and 1140) and 17 – 21 as well as Paragraphs 24, 36 (security analysis), 57 (intrusion detection), 63 – 64 (tailgating analysis), 17 and 129 – 130 (obviousness between analysis and analytics teachings), 257 – 270 (user generated analysis / analytic algorithms as obvious variants), and 251 – 255, 292 – 296 and 322 – 326 (detection area setting and analysis rendering obvious perimeter analytics further suggested in Carey Paragraph 5); Carey Paragraphs 5 (user input to select analytics), 56 – 59 (user interface to select analytics), 76 – 81 (user selected object and analytics to be used), and 100 – 104 (crowd parameters / size estimation (Paragraph 101) analytics determinations and other characteristics in Paragraph 88 to combine with Bog Paragraph 62); Pastrana Figures 7, 9, 11 and 13 (subfigures included with menus inside menus with different options in sub-menus) as well as Paragraphs 228 – 231, 237 – 240 (multiple menus to select options to place / locate objects – to combine with the options of Bog), 318, and 338 (menus for use with submenus / options dependent on previous selections)]; based at least in part on the voice or text based description of the one or more scenarios to be tested, and the identified one or more video analytics algorithms, determining a plurality of characteristics of the simulated objects that are desirable for testing the selected video analytics algorithm including one or 
more of [See previous limitation for citations and additionally Bog Figures 8 – 18 (subfigures included – see at least parameters in Figures 11 – 13 and reference characters 1806 and 1812) as well as Paragraphs 24, 36 (security analysis), 57 (intrusion detection), 63 – 64 (tailgating analysis), 70 – 80 (various parameters of objects set by user input), 397 – 412 (size of virtual object used / detected), 418 – 423 (parameters for the speed / movement of virtual / simulated objects); Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used to modify Bog and Carey (Figures 2, 6, and 7 at least)) and 72 – 76 (number / user set distribution of simulated objects to include); Pastrana Figures 7, 9, and 11 (subfigures included) as well as Paragraphs 228 – 231, 237 – 240 (multiple menus to select options to place / locate objects – to combine with the options of Bog), 273 (setting orientation of objects / dependent menus for objects to place), 318, and 338 (menus for use with submenus / options dependent on previous selections)]: a quantity of the simulated objects in the FOV [Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used further taught in Paragraphs 45 (plurality of objects to be used) and 48 (various objects)) and 72 – 76 (number / user set distribution of simulated objects to include)]; a distribution of the simulated objects in the FOV [Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used) and 72 – 76 (number / user set distribution of simulated objects to include)]; a starting point for each of the simulated objects in the FOV [Bog Figures 13 – 21 as well as Paragraphs 70 – 80 and 419 – 423 (see at least embodiments for point to point movement control and key point control)]; a movement of each of the simulated objects in the FOV [Bog Figures 13 – 21 as well as Paragraphs 70 – 
80 and 419 – 423 (speed / movement of virtual objects controlled where controlling accelerations renders obvious movement control as well as embodiments for point to point movement control)]; superimposing the simulated objects on the FOV of the video stream captured by the video camera to create an augmented video stream [Bog Figures 17 – 21 (subfigures included and see at least reference character 1902) as well as Paragraphs 420 – 433 (superimpose / overlay (obvious variant – see at least Paragraph 429) virtual objects into video for test where the virtual object is an obvious variant of the claimed simulated object); Carey Figures 1, 2, and 6 (see at least reference characters 715, 718, 720, and 725 with computer displays of captured video) as well as Paragraphs 85 – 90 (storing images for display to use for video analytics testing / surveillance)]; and processing the augmented video stream using the identified one or more video analytics algorithms to test the one or more scenarios described in the voice or text based description [See “receiving from a user …” limitation for citations and additionally Bog Figures 3 – 4, 10 – 14 and 17 – 22 (subfigures included and see at least reference characters 400, 1000, 1004, 1902, and 2100) as well as Paragraphs 257 – 260 (screen view of camera feeds), 410 – 433 (testing video with virtual / simulated objects overlaid (Paragraphs 426 – 433) in the video), 426 – 440 (test scenarios / analysis / results of tests combinable with Carey Figures 7 – 9 as well as Paragraphs 82 – 90 and 102 (assessments of alerts and determinations of patterns of analytic algorithms))]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana as the methods are similar in scope.

Regarding claim 19, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. 
Taylor teaches the use of virtual tests for video analytics algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects, in which the menus are combinable with the other references to display the available tests.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. The combination teaches identifying one or more video analytics algorithms based at least in part on the description of the one or more scenarios [Bog Figures 8 – 14 (subfigures included as well as see at least reference characters 1124, 1126, 1128, 1136, 1138, and 1140) and 17 – 21 as well as Paragraphs 17, 62, and 129 – 130 (obviousness between analysis and analytics teachings), 257 – 270 (user generated analysis / analytic algorithms as obvious variants); Carey Paragraphs 5 (user input to select analytics), 56 – 59 (user interface to select analytics) and 76 – 81 (user selected object and analytics to be used based on the scenario to analyze) and 100 – 104 (crowd parameters / size estimation (Paragraph 101) analytics determinations and other characteristics in Paragraph 88 to combine with at least Bog Paragraph 62)]; and processing the augmented video stream to test the effectiveness of each of the identified one or more video analytics algorithms [Bog Figures 11 – 14 and 18 – 22 (subfigures included) as well as Paragraphs 416 – 425 (testing video with virtual / simulated objects overlaid (Paragraphs 426 – 433) in the video), 426 
– 440 (test scenarios / analysis / results of tests combinable with Carey Figures 7 – 9 as well as Paragraphs 82 – 90 and 102 (assessments of alerts and determinations of patterns of analytic algorithms))]. See claim 17 for the motivation to combine Bog, Carey, Rze, and Pastrana.

Regarding claim 20, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytics algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects, in which the menus are combinable with the other references to display the available tests.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey and to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana. 
The combination teaches determining one or more of the plurality of characteristics of the simulated objects based at least in part on the identified one or more video analytics algorithms [Bog Figures 8 – 18 (subfigures included – see at least parameters in Figures 11 – 13 and reference characters 1806 and 1812) as well as Paragraphs 70 – 80 (various parameters of objects set by user input), 397 – 412 (size of virtual object used / detected), 418 – 423 (parameters for the speed / movement of virtual / simulated objects); Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used to modify Bog and Carey (Figures 2, 6, and 7 at least, and Paragraphs 76 – 81 and 85 – 88 (analytics determine values to test for and characteristics to analyze))) and 72 – 76 (number / user set distribution of simulated objects to include)]. See claim 17 for the motivation to combine Bog, Carey, Rze, and Pastrana.

Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over Bog, Carey, Rze, and Pastrana as applied to claim 17 above, and further in view of Henderson (WO 2024/186350 A1, referred to as “Henderson” throughout) [First cited in the Office Action mailed July 30th, 2025].

Regarding claim 18, Bog teaches a system to place virtual objects in video streams to test analytics algorithms. Carey teaches various video analytics algorithms and considerations in placing objects to use to modify Bog. Taylor teaches the use of virtual tests for video analytics algorithms using virtual / simulated objects in various situations / configurations. Rze teaches simulation parameters to control / set for the creation of simulated / virtual objects to insert into video analysis / testing applications. Pastrana teaches the use of menus for options to place virtual objects, in which the menus are combinable with the other references to display the available tests. 
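The claim 20 limitation addressed above, determining simulated-object characteristics based at least in part on the identified analytics algorithms, can be pictured as a per-algorithm parameter table overlaid on defaults. The sketch below is a hypothetical illustration only: the profile values and the merge strategy are invented, not taken from Bog's object parameters or Rze's distribution settings.

```python
# Hypothetical per-algorithm overrides; values are invented for illustration.
CHARACTERISTIC_PROFILES = {
    "crowd detection": {"quantity": 30, "distribution": "uniform"},
    "loitering detection": {"quantity": 1, "movement": "stationary"},
    "tailgating detection": {"quantity": 2, "movement": "follow"},
}

# Baseline characteristics applied when no algorithm-specific override exists.
DEFAULTS = {"quantity": 1, "distribution": "point", "movement": "linear"}

def characteristics_for(algorithms):
    """Overlay each identified algorithm's profile onto the default characteristics."""
    merged = dict(DEFAULTS)
    for algo in algorithms:
        merged.update(CHARACTERISTIC_PROFILES.get(algo, {}))
    return merged
```

The design choice here is that the identified algorithm, rather than the user, drives the object parameters, which is the distinction the claim draws from a purely user-set configuration.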
Henderson teaches the use of natural language processing to process events / objects to incorporate into testing processes.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bog with the considerations in adding objects and use of analytic algorithms as taught by Carey, to incorporate distribution and other parameters of simulated / virtual objects as taught by Rze with selection methods and menus / options as taught by Pastrana, and to add natural language processing of user inputs as taught by Henderson. The combination teaches utilizing natural language processing to extract keywords from the voice or text based description of the one or more scenarios to be tested [Rze Paragraphs 102 – 103 (user voice input); Henderson Paragraphs 26 – 27 and 86 (NLP (natural language processing) of user inputs – to combine with Bog and Carey prompts for language to parse)]; mapping the keywords to one or more of the plurality of characteristics of the simulated objects [Henderson Paragraphs 26 – 27 and 86 (NLP (natural language processing) of user inputs – to combine with Bog and Carey prompts for language to parse); Bog Figures 8 – 14 (subfigures included as well as see at least reference characters 1124, 1126, 1128, 1136, 1138, and 1140 as text based descriptions) and 17 – 21 as well as Paragraphs 17 and 129 – 130 (obviousness between analysis and analytics teachings), 257 – 275 (user generated analysis / analytic algorithms as obvious variants and text input such as in Paragraphs 274 – 275); Carey Paragraphs 5 (user input to select analytics), 56 – 59 (user interface to select analytics), 76 – 81 (user selected object and analytics to be used, rendering obvious the claimed scenarios (due to the type of analytics tested) to one of ordinary skill in the art), and 100 – 104 (characteristics / desired analytics)]; and determining one or more of the plurality of characteristics of the simulated 
objects based at least in part on the mapping [See previous limitations for citations and additionally Bog Figures 8 – 18 (subfigures included – see at least parameters in Figures 11 – 13 and reference characters 1806 and 1812) as well as Paragraphs 70 – 80 (various parameters of objects set by user input), 397 – 412 (size of virtual object used / detected), 418 – 423 (parameters for the speed / movement of virtual / simulated objects); Rze Figures 1 – 3 and 8 (subfigures included) as well as Paragraphs 36 – 38 (number and distribution of objects used to modify Bog and Carey (Figures 2, 6, and 7 at least)) and 72 – 76 (number / user set distribution of simulated objects to include)]. See claim 1 for the motivation to combine Bog, Carey, Rze, and Pastrana. The motivation to combine Henderson with Pastrana, Rze, Carey, and Bog is to combine features in the same / related field of invention of digital item / object processing [Henderson Paragraphs 1 – 3] in order to improve content translation from language / other input sources [Henderson Paragraphs 22 – 23, where the Examiner observes at least KSR Rationale (F) is also applicable].

Allowable Subject Matter

Claim 12 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: The claim, when considered as a whole, requires at least two different distribution options for simulated objects from a closed list that is not fairly taught in the prior art, and would be allowable if fully incorporated into independent claim 1 including intervening claim 10, as merely placing objects according to one distribution is non-obvious in view of the cited prior art.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Cella, et al. 
(US PG PUB 2023/0281527 A1, referred to as “Cella” throughout) teaches natural language processing of user inputs and, in Figures 190 – 200, the testing of video analytics algorithms. Rathod (US PG PUB 2018/0350144 A1, referred to as “Rathod” throughout) teaches menus and different options in menus for virtual objects to display in Figures 6 – 8, 27 – 30, and 59 (obvious arrangements).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler W Sullivan whose telephone number is (571) 270-5684. The examiner can normally be reached IFP. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at (571) 272-7327. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TYLER W. SULLIVAN/
Primary Examiner, Art Unit 2487

Prosecution Timeline

Jun 13, 2023: Application Filed
Jul 26, 2025: Non-Final Rejection (§103, §112)
Oct 29, 2025: Response Filed
Feb 13, 2026: Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594884: TRAILER ALIGNMENT DETECTION FOR DOCK AUTOMATION USING VISION SYSTEM AND DYNAMIC DEPTH FILTERING (2y 5m to grant; granted Apr 07, 2026)
Patent 12593027: INTRA PREDICTION FOR SQUARE AND NON-SQUARE BLOCKS IN VIDEO COMPRESSION (2y 5m to grant; granted Mar 31, 2026)
Patent 12563211: VIDEO DATA ENCODING AND DECODING USING A CODED PICTURE BUFFER WHOSE SIZE IS DEFINED BY PARAMETER DATA (2y 5m to grant; granted Feb 24, 2026)
Patent 12542894: Method, An Apparatus and a Computer Program Product for Implementing Gradual Decoding Refresh (2y 5m to grant; granted Feb 03, 2026)
Patent 12541880: CAMERA CALIBRATION METHOD, AND STEREO CAMERA DEVICE (2y 5m to grant; granted Feb 03, 2026)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 98% (+31.6%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 380 resolved cases by this examiner. Grant probability derived from career allow rate.
