Prosecution Insights
Last updated: April 19, 2026
Application No. 18/119,605

AUTOMATIC DETERMINATION AND MONITORING OF VEHICLES ON A RACETRACK WITH CORRESPONDING IMAGERY DATA FOR BROADCAST

Status: Non-Final OA (§103)
Filed: Mar 09, 2023
Examiner: ITSKOVICH, MIKHAIL
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Sportsmedia Technology Corporation
OA Round: 9 (Non-Final)

Grant Probability: 35% (At Risk)
Estimated OA Rounds: 9-10
Estimated Time to Grant: 4y 0m
Grant Probability with Interview: 59%

Examiner Intelligence

Career Allow Rate: 35% (206 granted / 585 resolved; -22.8% vs TC avg)
Interview Lift: +23.8% for resolved cases with interview
Typical Timeline: 4y 0m avg prosecution
Career History: 647 total applications across all art units; 62 currently pending

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101      11.5%        -28.5%
§103      53.5%        +13.5%
§102      12.3%        -27.7%
§112      20.4%        -19.6%

TC averages are estimates; based on career data from 585 resolved cases.
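The report's derived figures are simple arithmetic on the raw counts above. A minimal sketch, assuming only the numbers shown in this report (the function and variable names are illustrative, not from any analytics API):

```python
# Sketch of how the report's examiner metrics can be derived from raw counts.
# The counts below come from this report; the names are illustrative only.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that were granted."""
    return granted / resolved

# Career allow rate: 206 granted out of 585 resolved, roughly 35.2%.
career = allow_rate(206, 585)

# The report shows -22.8% vs the Tech Center average, which implies a
# TC-average allow rate near 58%.
tc_avg = career + 0.228

# Interview lift: the 59% with-interview probability against the ~35% baseline.
lift = 0.59 - career

print(f"career allow rate:  {career:.1%}")
print(f"implied TC average: {tc_avg:.1%}")
print(f"interview lift:     {lift:+.1%}")
```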

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/06/2025 has been entered.

Response to Arguments

Applicant's arguments filed on 08/06/2025 have been fully considered but they are not persuasive. Generally, Examiner suggests directing the claim amendments to features that solve particular problems in the art when cooperating with the rest of the claim elements. A multitude of features that stand alone do not provide a strong basis for non-obviousness. Applicant argues: “Solely to expedite examination, Applicant has amended claim 14 to recite, in part, "wherein the rules are weighted to specific events of vehicles on the racetrack" and "wherein the application of the rules includes adding a numerical value to the quantified event score upon a determination of a celebrity status of the at least one vehicle driver." None of the cited prior art teaches these limitations. Thus, Applicant respectfully submits that the rejection is overcome and requests it to be withdrawn.” Examiner notes the updated reasons for rejection below. Applicant argues: “While McCoy may teach tracking objects via sensors or camera, Applicant respectfully submits that is not the same thing as deriving a quantified event score based on the application of rules to imagery data as in the present application. 
The event score of the present application (also referred to as an "interestingness score" in the specification as filed) "includes a score or any other quantified data for determining the importance, relevancy, interestingness, noteworthiness, etc. of events and/or objects captured using one or more video cameras and/or through telemetry." Para. [0054]. … Additionally, the rules are weighted to specific events of vehicles on a racetrack, such that certain events add or subtract from the event score. As described in paragraph [0058] …” Examiner notes that the claimed “quantified event score based on the application of rules” is extremely vague language encompassing most digital processing of image data and not limited to the examples in the Specification. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Since Applicant does not claim all the details cited in the Specification, it is an indication that Applicant does not intend to limit the claims to such details. See updated reasons for rejection below. Regarding the newly amended language, Applicant argues: “Moreover, the Office Action states, on page 14, that "wherein the rules include determining a celebrity status of at least one vehicle driver" is interpreted as metadata that identifies a tracked object, in which McCoy teaches metadata associated with an object to be tracked. Applicant respectfully submits that the present application utilizes the determination of a celebrity status to add a numerical value to the quantified event score, which is not the same as mere metadata associated with an object.” Examiner notes that the newly amended language is addressed in the updated reasons for rejection below. 
Possession of a particular data content (celebrity status metadata) does not appear to patentably distinguish the methods of processing data content (metadata).

Response to Amendment

Examiner withdraws the rejection of Claims 1-20 under 35 U.S.C. 112(a) in view of the amendments.

Claim Construction

This paragraph resolves the level of ordinary skill in the pertinent art. Present claims are in the field of video processing and computer vision, and they reference the following without elaborating on the underlying structures or methodology: “based on the imagery data, determining vehicle dynamics including real-world positions, real-world velocities, camera image positions, and camera image velocities of both the at least one vehicle and at least one other object … determining an imminent vehicle entry into a field of view of one of the plurality of video cameras and based on the vehicle dynamics determining a direction and a point of entry of the at least one vehicle into the field of view of the one of the plurality of video cameras”. Thus, in order to comply with the enablement and definiteness requirements under 35 U.S.C. 112, a person of ordinary skill in this art is expected to know the structures and methodology underlying these and related elements as well as equivalents, variants, and applications of these elements in the field. “When a patent claims a structure already known in the prior art that is altered by the mere substitution of one element for another known in the field, the combination must do more than yield a predictable result.” KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 415, 82 USPQ2d 1385 (2007). Note that “duplication of parts has no patentable significance unless a new and unexpected result is produced.” In re Harza, 274 F.2d 669, 124 USPQ 378 (CCPA 1960). Broadly providing an automatic or mechanical means to replace a manual activity which accomplished the same result is not sufficient to distinguish over the prior art. 
In re Venner, 262 F.2d 91, 95, 120 USPQ 193, 194 (CCPA 1958); MPEP 2144.04(III); FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016). A claimed improvement by use of a computer requires a nexus to a particularly claimed algorithm and may not come “solely from the capabilities of a general-purpose computer.” FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This paragraph describes the treatment of admitted prior art. In describing an invention, Applicant must inevitably reference that which is known in the art as the basis for the invention; however, it is important that the claims particularly point out and distinctly claim that which Applicant regards to be his own invention. See 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. A statement by an applicant in the specification or made during prosecution identifying prior art is an admission which can be relied upon for both anticipation and obviousness determinations, regardless of whether the admitted prior art would otherwise qualify as prior art under the statutory categories of 35 U.S.C. 102. Riverwood Int’l Corp. v. R.A. Jones & Co., 324 F.3d 1346, 1354, 66 USPQ2d 1331, 1337 (Fed. Cir. 2003); Constant v. Advanced Micro-Devices Inc., 848 F.2d 1560, 1570, 7 USPQ2d 1057, 1063 (Fed. Cir. 1988). The examiner must determine whether the subject matter identified as prior art is applicant’s own work, or the work of another. In the absence of another credible explanation, examiners should treat such subject matter as the work of another. MPEP 2129.

Claims 1-12, 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20150116501 to McCoy (“McCoy”) in view of US 20030095186 to Aman (“Aman”) in view of Applicant admitted prior art (“AAPA”) described in the Specification, and in view of US 10311064 to Pollak (“Pollak”).

Regarding Claim 14: “A method for automatically tracking and analyzing imagery data of at least one vehicle on a racetrack comprising: a video event management system including a processor, a memory, and a database (“one or more processors, such as a processor 202, a memory 204” McCoy, Paragraphs 39-43 including networked computers as in Figs. 1-4. 
Also, “The processor 202 may store the data received from the cameras 104 and the sensors 106 in the memory 204,” embodying a database. McCoy, Paragraph 47. Also note embodiments of the database in Aman, original Claims 1-10: “a database representative of each object's locations.” Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of McCoy to store or retrieve object data from a database as taught in Aman, in order to organize the related data stored in memory.) receiving video imagery from a plurality of video cameras positioned around the racetrack; (“The processor 202 may store the data received from the cameras 104 and the sensors 106 in the memory 204.” McCoy, Paragraph 47. “In an embodiment, the cameras 104 may be installed in such a way that a position of each of the cameras 104 is fixed. … the cameras 104 may be installed at various locations surrounding a playground. … soccer field … ” McCoy, Paragraphs 13-16, 20, 97. While McCoy does not explicitly teach use of the system on a racetrack, McCoy teaches substantively identical systems and methods for substantively identical use (automated optical object tracking) on a playground and a soccer field as well as security applications with particular fields to be tracked. See additional treatment over Glier below. Cumulatively, Aman teaches a substantively similar embodiment in the context of multi-camera imaging and object tracking during sporting events: “All of the cameras that are used to view the marker and therefore object movement are pre-placed in fixed strategic locations designed to keep the entire tracking volume in view of two or more cameras” Aman, Paragraph 13. 
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of McCoy to have “a combined field of view of a whole” of the tracked field or security location as taught in Aman, in order to “keep the entire tracking volume in view.” Aman, Paragraph 13. selecting imagery data of the at least one vehicle from the video imagery; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, the detected “at least one vehicle” is an example of a tracked object on the field, and the system for tracking objects in a video is substantively identical whether the object is a “vehicle” or an “other object.” Prior art teaches this: “The objects 102 may correspond to any living and/or non-living thing that may be tracked. The objects 102 may correspond to people, animals, articles (such as a ball used in a sport event), an item of inventory, a vehicle, and/or a physical location. … to simultaneously track two or more objects” McCoy, Paragraphs 17, 34.) based on the imagery data, determining vehicle dynamics; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, determining vehicle dynamics can include tracking real-world locations, real-world velocities, camera image positions, and camera image velocities related to the tracked object. Prior art teaches this: “The objects 102 may correspond to any living and/or non-living thing that may be tracked. The objects 102 may correspond to people, animals, articles (such as a ball used in a sport event), an item of inventory, a vehicle, and/or a physical location. … to simultaneously track two or more objects … may determine the current location of the first object 102a and the second object 102b to be tracked.” McCoy, Paragraphs 17, 34.) 
enhancing the imagery data of the at least one vehicle using a signal processing facility; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, enhancing the imagery data can be in producing a digital image or in enhancing an imaging quality such as contrast. See Specification, Paragraph 67. Prior art teaches the first embodiment: “tracking objects using a digital camera.” McCoy, Paragraph 1. Also images can be enhanced by processing: “In an embodiment, the processor 202 may crop the high resolution signal based on a position of an object … The controlling device 108 may track an object based on the cropped portion.” McCoy, Paragraph 85. Or the processor can enhance the resolution of the image before image processing as in McCoy, Paragraph 19. These embodiments correspond to determining “kept portions” of Specification, Paragraph 67.) producing event-representing mathematical models (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, “event representing mathematical models can be produced if desired based on the determined positions, paths and/or other states of the specifically tracked objects” Specification, Paragraph 67. Prior art teaches an example, to “apply three-dimensional (3D) processing to the output of the non-visible cameras to determine the locations and distances of an object to be tracked.” McCoy, Paragraph 71. See similar features in Aman, Paragraphs 100, 176, and statement of motivation below.) and/or statistical information based on the enhanced imagery data of the at least one vehicle; (See rejection of the first option above. 
Cumulatively: Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, “Statistical information 674 regarding each tracked object can also be produced … based on, for example, average and/or peak and/or minimum speeds, average directions and/or angles, distance traveled by each tracked object, height of each tracked object, and so forth.” Specification, Paragraph 67. Prior art teaches an example, to “apply three-dimensional (3D) processing to the output of the non-visible cameras to determine the locations and distances of an object to be tracked.” McCoy, Paragraph 71. Also note “The processor 202 may store the determined current location, activities performed, and/or the direction and/or the distance of the first object 102a relative to the cameras 104 in the memory 204.” McCoy, Paragraph 51. See similar features in Aman, Paragraphs 100, 176, and statement of motivation below.) deriving a quantified event score for the at least one vehicle by application of rules to the imagery data (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, a “quantified event score” can be a digital number or identifier for one of (or a combination of) the listed events based on images and object locations; for example each added number can be a binary bit for representing the presence or absence of an activity. McCoy teaches such embodiments: “For example, based on received GPS signals [telemetry], the processor 202 may determine whether a tracked person is moving up or down a staircase. The processor 202 may store the determined current location [telemetry], activities performed [i.e. a particular motion], and/or the direction and/or the distance of the first object 102a relative to the cameras 104 in the memory 204.” McCoy, Paragraph 51. Also see embodiments in Pollak, Column 10, lines 1-9 as discussed below.) 
wherein the rules are weighted to specific events of vehicles on the racetrack, (The claim does not specify whether weighting to specific events means considering specific events and not others, or whether a particular mathematical operation is applied using weighted values, or another weight construction. McCoy teaches the first example, “For example, based on received GPS signals [telemetry], the processor 202 may determine whether a tracked person is moving up or down a staircase,” where the rules are weighted to determine events in the video of a person moving up or down a staircase. See McCoy, Paragraph 51 and application of computer vision to vehicles on the racetrack in view of Aman above. Cumulatively, an example of the second embodiment is taught in Pollak, Column 10, lines 1-9. See statement of motivation below.) wherein the rules include determining a celebrity status of at least one vehicle driver, … upon a determination of the celebrity status of the at least one vehicle driver; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, celebrity status of a vehicle driver is an example of metadata that identifies a tracked object. McCoy teaches that “the metadata associated with an object to be tracked may include but are not limited to, a name … a unique identifier … and/or any other information capable of identifying … that identifies the person to be tracked.” McCoy, Paragraphs 48, 32. The information may be precise identification or a recognizable characteristic such as “the color of a dress worn by a person.” Thus, McCoy provides data fields that track the object, the identity of a person associated with the object, and descriptive characteristics of a person, of which a celebrity status is an example. See similarly in Pollak, Column 9, lines 17-28. 
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to determine and use metadata that stores distinguishing characteristics of a person, such as a celebrity status and/or any other information capable of identifying a person in a video, in order to track that person in the media content or to provide searchable features in the media content. See McCoy, Paragraphs 48, 32 and Pollak, Column 9, lines 17-28.) generating at least one floating subframe within a full frame of a video camera encompassing the at least one vehicle or vehicle path, wherein the video camera is selected from the plurality of video cameras; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, this element embodies a process where a camera [frame] is selected based on scored criteria indicating a presence of an object in the field of view and then cropping the field of view to the portion that contains the object. McCoy teaches this embodiment: “the processor 202 may select the first camera 104a such that the first object 102a lies in the field of view of the first camera 104a. … two or more cameras may satisfy the pre-determined criteria …” McCoy, Paragraphs 56-57. McCoy further processes the image to “crop a portion of a high resolution signal that includes an object to be tracked.” McCoy, Paragraph 85. Similarly, Aman teaches subframes as directed and zoomed camera views in Paragraph 176. See statement of motivation above.) adjusting the subframe coordinates based on the vehicle dynamics; (“to determine a direction and a distance of the first object 102a [vehicle dynamics] relative to the cameras 104. In an embodiment, the processor 202 may determine the direction and the distance of the first object 102a relative to the cameras 104 based on a triangulation method,” in McCoy, Paragraph 51.) 
linking the at least one subframe to the video imagery and the stills of the video imagery; and (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, the linking can be embodied in storing the determined metadata, and selected camera images/video in memory. Prior art teaches this embodiment: “The memory 204 may further store one or more [subframe] images and/or video content captured by the cameras 104 … The processor 202 may store the determined current location, activities performed, and/or the direction and/or the distance of the first object 102a relative to the cameras 104 in the memory 204. … store information associated with one or more tracked objects in the memory 204. Examples of such information may include, but are not limited to, a time at which the one or more objects are seen in an image captured by the cameras 104 and a duration for which the one or more objects are seen in an image captured by the cameras 104.” McCoy, Paragraphs 43, 51, 83. Applicant may wish to elaborate on the steps embodied in linking.) storing video imagery data for a specific subframe based on the event score;” (“The controlling device may crop [store a subframe and discard unwanted data in] an image captured by the selected first set of cameras based on a relative position of the plurality of objects with the image [event identification]. … crop an image and/or video signal …” McCoy, Paragraphs 14-15 and 84-85. Also note the embodiment in “The memory 204 may further store one or more images and/or video content captured by the cameras 104” thus discarding images and video data not stored. McCoy, Paragraph 43.) determining an [imminent] vehicle entry into a field of view of a second video camera of the plurality of video cameras (“the processor 202 may be operable to switch between multiple cameras based on the change in location of the first object 102a. 
For example, when the location of the first object 102a changes, the first object 102a may move out of the field of view of the selected first camera 104a. In such a case, the processor 202 may select the second camera 104b.” McCoy, Paragraphs 59-60 and similarly in Aman, Paragraph 176. See statement of motivation below.) based on pre-established mappings, wherein the pre-established mappings are used to determine [ahead of time] when the vehicle is coming into view for the second camera; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, the plurality of “pre-established mappings” can refer to “in camera-scene points that are found within the two-dimensional Sx by Sy image capture frames” of each camera, which are differentiated from a singular world map. See Specification, Paragraph 91. Prior art refers to such mappings as fields of view, and tracks which objects are within each camera’s field of view [mapping]: “The controlling device may select a second set of cameras from the plurality of cameras for tracking one or more objects of the plurality of objects when the one or more objects move out of a field of view of one or more cameras of the selected first set of cameras. … A location of the plurality of objects relative to the plurality of cameras may be determined based on the received one or more signals.” McCoy, Paragraph 13. See similar embodiments of camera mappings that can further be combined into an area tracking matrix in Aman, Paragraph 253, Figs. 14b and 25, and statement of motivation below.) 
generating at least one second floating subframe … [in advance] of the current position of the at least one vehicle based on the imminent vehicle entry and the pre-established mappings; (“When the first object 102a again moves closer [advancing toward] to the first camera 104a [the first camera mapping], the processor 202 may switch again to the first camera 104a to track the first object 102a,” thus, switching in advance of entering the first camera mapping. McCoy, Paragraphs 59-60. Similarly see predictive switching based on field of view matrix in Aman, Paragraph 253. See statement of motivation below.) second floating subframe operable to overlap the at least one floating subframe (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, an overlapping floating subframe embodies an overlapping field of view. See Specification, Fig. 3. Prior art teaches this embodiment: “the plurality of smaller resolution cameras may be set up such that the field of view of the plurality of smaller resolution cameras may overlap” McCoy, Paragraphs 19-20. See similarly “an overlapping configuration of tracking cameras” in Aman, Paragraph 12.) automatically applying one or more metatags to each subframe at least based on the relative position determined (“The memory 204 may further be operable to store data associated with the objects 102 to be tracked. Examples of such data associated with the objects 102 may include, but are not limited to, metadata associated with the objects 102, locations of the objects 102, preference associated with the objects 102, and/or any other information associated with the objects 102. … The memory 204 may further store one or more images and/or video content captured by the cameras 104,“ indicating that metadata is associated with the subframes that capture the video of the objects. 
The processor 202 may store the data received from the cameras 104 and the sensors 106 in the memory 204,” which include camera and thus subframe location / coordinates. McCoy, Paragraphs 41-42.) position determined by at least one near field wireless transceiver placed on the at least one vehicle (“The sensors 106 may be operable to determine a location of the objects 102 … an RFID tag coupled to the clothes of person may transmit radio frequency (RF) signals to the controlling device 108.” McCoy, Paragraphs 25-26.) storing the one or more metatags in a knowledge database; (“The memory 204 may further be operable to store data associated with the objects 102 to be tracked. Examples of such data associated with the objects 102 may include, but are not limited to, metadata associated with the objects 102, locations of the objects 102, preference associated with the objects 102, and/or any other information associated with the objects 102.“ McCoy, Paragraphs 41-42.) periodically determining a keepsake worthiness score for the at least one floating subframe and the at least one second floating subframe, … based on the one or more metatags applied to the at least one floating subframe and the at least one second floating subframe; and (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, a keepsake worthiness score can be the measure of speed or proximity to other objects. Prior art teaches: “the player is identified [metatags] along with pertinent information including orientation, direction of movement, velocity, and acceleration as well as current relative (X, Y) location.” McCoy, Paragraph 26. Also, “may determine the current location of the first object 102a and the second object 102b to be tracked.” McCoy, Paragraphs 17, 34. See treatment of metadata in video subframes above.) 
automatically discarding footage of the at least one floating subframe and the at least one second floating subframe with the keepsake worthiness score below a preset threshold.” (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, discarding may be accomplished by various means, such as not selecting the footage for use, cropping unnecessary footage, or storing only selected footage based on a threshold that determines the presence or a particular quality of the claimed metadata. Prior art teaches such embodiments: “The controlling device may crop [discard footage data in] an image captured by the selected first set of cameras based on a relative position of the plurality of objects with the image [example event score]. … crop an image and/or video signal …” McCoy, Paragraphs 14-15 and 84-85. In another embodiment, “The controlling device may select a first set of cameras from the plurality of cameras to track the plurality of objects based on the received metadata,” thus selecting footage of metadata that meets the worthiness threshold and not selecting (i.e. discarding) footage that does not. Also note the embodiment in “The memory 204 may further store one or more images and/or video content captured by the cameras 104” thus discarding one or more images and video content that is not stored. McCoy, Paragraph 43. Therefore, before the effective filing date of the claimed invention, it would have been known or obvious to one of ordinary skill in the art that image or video footage can be discarded if it did not have objects of interest in it, as determined based on measurements extracted from the video and from metadata related to the objects or the video footage, such as speed or proximity of objects.) 
McCoy does not explicitly teach: ”determining an imminent vehicle entry into a field of view … determine ahead of time when the vehicle is coming into view for the second camera … generating at least one second floating subframe in advance” As noted above, McCoy tracks actual entries and imminent exits from the field of view of each camera, so it has the tools to track imminent object movement. And Aman teaches the above feature in the context of multi-camera imaging and object tracking during sporting events: “This [camera] reassignment decision can be based upon the information gathered by the scalable area tracking matrix [combined mapping of camera fields of view] 504m, predictive calculations made by computer 160 concerning the expected next positions of any and all players, or both.” See Aman, Paragraph 253. This teaching corresponds substantively to the embodiments supported in the Specification, Paragraph 91. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of McCoy to “determine an imminent vehicle entry into a field of view, determine ahead of time when the vehicle is coming into view for the second camera, generate at least one second floating subframe in advance” as taught in Aman, in order to “automatically reassign cameras … so as to ensure total maximum player visibility.” Aman, Paragraph 253. Finally, in reviewing the present application, there does not seem to be objective evidence that the Claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness. 
Specification Paragraphs 48-49 indicate that the keepsake worthiness is determined and evaluated as it would have been by a human, and the claims are not limited to more than a general automation of this human activity based on information of interest when viewing an auto race. logically linking footage data to a corresponding segment portion of a motion modeling curve (Note that this feature, including the following three elements, appears to be extra-solution activity that is not used by the other elements of the claim. Further, prior art teaches this as: “the tracking system creates a mathematical model of the tracked players and equipment as opposed to a visual representation that is essentially similar to a traditional filming and broadcast system. A mathematical model allows for the measurement of the athletic competition while also providing the basis for a graphical rendering of the captured movement for visual inspection … creating a mathematical model of human movement … video frame analysis,” Aman, Paragraphs 100-101. Thus the particular segments of the motion models are logically linked to the particular video frames from which the motion model segments are determined.) and positive or negative outcomes; (The claim is not specific as to what defines positive and negative outcomes. Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, these can be positive or negative outcomes of detecting a particular object or event. See Specification, Paragraph 99. Prior art teaches an example as: “The processor 202 may store the determined current location, activities performed, and/or the direction and/or the distance of the first object 102a relative to the cameras 104 in the memory 204.” McCoy, Paragraphs 51, 55-56. See similar features in Aman, Paragraphs 100, 176, and statement of motivation above.)
wherein the motion modeling curve of a venue event varies per the venue event; (“It is preferable that the tracking system creates a mathematical model of the tracked players and equipment” and thus the model varies for each venue event because the players and equipment vary. Aman, Paragraphs 100, 176, and statement of motivation above.) automatically storing the footage data and the positive or negative outcomes in the knowledge database; (“The processor 202 may store the determined current location, activities performed, and/or the direction and/or the distance of the first object 102a relative to the cameras 104 in the memory 204.” McCoy, Paragraph 51.) searching the knowledge database for the footage data based on a user query for objects of potential interest versus the positive or negative outcomes; (For example, “In an embodiment, the processor 202 may select the first camera 104a such that the first camera 104a satisfies one or more pre-determined criteria,” which describes a search based on the criteria discussed above. McCoy, Paragraphs 56-57. Similarly, “The system employs predictive techniques based upon the object's last known position, acceleration, velocity, and direction of travel to minimize the search time required to locate the object in subsequent video frames.” Aman, Paragraphs 54, 176, and statement of motivation above.) McCoy and Aman do not teach “wherein the application of the rules includes adding a numerical value to the quantified event score.” Pollak teaches this feature in the context of managing media content: “the first priority score and the second priority score are summed to calculate a total priority score.” Pollak, Column 10, lines 1-9. Pollak also notes that there can be additional priority metrics and additional weighted scores.
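The Pollak-style additive scoring (summing component scores, with a rule that adds a numerical value upon a determination of celebrity status, as recited in amended claim 14) might look like the following sketch. The weights, component names, and bonus value are hypothetical and chosen only to illustrate the summation:

```python
# Hypothetical sketch: a weighted sum of event components, plus a rule
# that adds a fixed numerical value when the driver has celebrity status.
def event_score(speed_score, proximity_score, celebrity, bonus=10.0,
                weights=(0.6, 0.4)):
    """Weighted sum of event components plus a celebrity bonus."""
    score = weights[0] * speed_score + weights[1] * proximity_score
    if celebrity:
        score += bonus  # the rule adds a numerical value to the event score
    return score

print(event_score(80.0, 50.0, celebrity=True))   # celebrity bonus applied
print(event_score(80.0, 50.0, celebrity=False))  # base weighted sum only
```

The point of contention is whether summing priority scores as in Pollak renders obvious adding a value to a quantified event score keyed to a specific driver attribute.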
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of McCoy and Aman to use rules that add a numerical value to the quantified event score as taught in Pollak, in order to determine whether a “score is indicative of the probability that the user is interested in the content item.” Pollak, Column 10, lines 1-9. Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness. Regarding Claim 15: “The method of claim 14, wherein the vehicle dynamics include real-world positions, real-world velocities, camera image positions, and/or camera image velocities for the at least one vehicle.” (“The objects 102 may correspond to any living and/or non-living thing that may be tracked. The objects 102 may correspond to people, animals, articles (such as a ball used in a sport event), an item of inventory, a vehicle, and/or a physical location. … to simultaneously track two or more objects … may determine the current location of the first object 102a and the second object 102b to be tracked.” McCoy, Paragraphs 17, 34. See tracking camera positions in McCoy, Paragraph 26. Also note “the player is identified along with pertinent information including orientation, direction of movement, velocity, and acceleration as well as current relative (X, Y) location.” McCoy, Paragraph 26. See statement of motivation in Claim 1.) Regarding Claim 16: “The method of claim 14, wherein the plurality of video cameras are fixed with stationary fields of view.” (“In an embodiment, the cameras 104 may be installed in such a way that a position of each of the cameras 104 is fixed.
… the cameras 104 may be installed at various locations surrounding a playground. … soccer field … ” McCoy, Paragraphs 13-16, 20, 97.) Regarding Claim 17: “The method of claim 14, wherein the imagery data of at least one vehicle is selected based on telemetry data of the at least one vehicle.” (“In an example, the processor 202 may select the first camera 104a such that the first camera 104a is closest to the location of the first object 102a. In another example … in the field of view of the first camera 104a. In another example, … a front image of the first object 102a,” where each criterion is based on telemetry data of the tracked object (vehicle) corresponding to the desired location, orientation, and distance of the object with respect to the cameras. McCoy, Paragraph 56.) Regarding Claim 18: “The method of claim 14, wherein the adjusting the subframe coordinates further comprises setting a tracking velocity for the subframe coordinates to match a direction of the at least one vehicle based on the camera image positions and the camera image velocities of the at least one vehicle.” (“For example, when the first object 102a moves out of the field of view of the selected first camera 104a, the processor 202 may adjust the pan, zoom, and/or tilt of the selected first camera 104a such that the first object 102a may remain in the field of view of the selected first camera 104a,” which sets a tracking velocity for the camera image to track the object. McCoy, Paragraph 58. Similarly, see “Using the current (X, Y) location as well as the direction of movement, velocity and acceleration, the preferred embodiment controllably directs one or more (X, Y, Z) pan, tilt and zoom cameras to automatically follow the given player.” in Aman, Paragraph 76 and statement of motivation in Claim 14.)
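The claim 18 limitation (a tracking velocity for the subframe coordinates matched to the vehicle's camera-image velocity) can be illustrated with a minimal sketch. The per-frame update model below is a hypothetical illustration, not the disclosed method of McCoy or Aman:

```python
# Hypothetical sketch: the subframe center is advanced each frame at a
# tracking velocity matched to the vehicle's image-space velocity, so the
# crop follows the vehicle's direction of travel across the frame.
def track_subframe(center, vel_px_per_frame):
    """Advance the subframe center by the vehicle's per-frame image velocity."""
    return (center[0] + vel_px_per_frame[0], center[1] + vel_px_per_frame[1])

center = (100.0, 200.0)
for _ in range(3):                       # three video frames
    center = track_subframe(center, (10.0, 0.0))
print(center)  # the subframe has moved 30 px along x
```

Matching the subframe velocity to the vehicle's image velocity is what keeps the vehicle stationary within the moving crop, as opposed to re-detecting and re-centering it every frame.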
Regarding Claim 19: “The method of claim 14, further comprising discarding excess video imagery data and excess stills data of the linked video imagery and the linked stills of the video imagery based on the event score.” (“The controlling device may crop [discard imagery data in] an image captured by the selected first set of cameras based on a relative position of the plurality of objects with the image [example event score]. … crop an image and/or video signal …” McCoy, Paragraphs 14-15 and 84-85. Also note the embodiment in “The memory 204 may further store one or more images and/or video content captured by the cameras 104,” thus discarding images and video data not stored. McCoy, Paragraph 43.) Regarding Claim 20: “The method of claim 14, wherein the at least one floating subframe includes metadata including subframe metadata and vehicle metadata including the event score.” (“The memory 204 may further be operable to store data associated with the objects 102 to be tracked. Examples of such data associated with the objects 102 may include, but are not limited to, metadata associated with the objects 102, locations of the objects 102, preference associated with the objects 102, and/or any other information associated with the objects 102. … The memory 204 may further store one or more images and/or video content captured by the cameras 104,” where the images have an encoded resolution, “The processor 202 may store the data received from the cameras 104 and the sensors 106 in the memory 204,” which include camera and thus subframe location/coordinates. McCoy, Paragraphs 41-42. See additional treatment of camera resolution and output image resolution in Paragraphs 84-85.
Cumulatively note that where McCoy does not exhaustively list all metadata content that can be stored, McCoy does indicate that metadata storage is generally applicable to all related data that is available from the camera, the sensors, or from calculations: “may include, but are not limited to, metadata …” Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to store additional or substitute metadata content in connection with captured images, tracked objects, and detected events.) Claim 1 is rejected for reasons stated for Claim 14, and because prior art teaches: selecting the at least one vehicle based on a celebrity status of at least one vehicle driver; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, celebrity status of a vehicle driver is an example of metadata that identifies a tracked object. Prior art teaches this application: “The controlling device 108 may receive metadata identifying the first object 102a to be tracked. … Examples of the metadata associated with an object to be tracked may include, but are not limited to, a name of an object to be tracked, an image of an object to be tracked, a unique identifier associated with a object to be tracked … and/or any other information capable of identifying an object to be tracked.” McCoy, Paragraphs 32, 48. See statements of motivation and obviousness of substitution and automation of human activity in Claim 14 and Claim Construction above.) retrieving camera image velocity data and camera image position data of the at least one floating subframe and (“Using the current (X, Y) location as well as the direction of movement, velocity and acceleration, the preferred embodiment controllably directs one or more (X, Y, Z) pan, tilt and zoom cameras to automatically follow the given player.” Aman, Paragraph 176 and statement of motivation in Claim 14.
See similarly, tracking locations and changes in location of the object relative to each camera in McCoy, Paragraphs 59-60. See statement of motivation in Claim 14.) transforming the camera image velocity data and the camera image position data based on an angle and a position of the second video camera of the plurality of video cameras (“Using the current (X, Y) location as well as the direction of movement, velocity and acceleration, the preferred embodiment controllably directs one or more (X, Y, Z) pan, tilt and zoom cameras to automatically follow the given player” with the camera image velocity, position (location, zoom), and angle (pan, tilt). Aman, Paragraph 176. See statement of motivation in Claim 14.) such that the at least one floating subframe is operable to continuously and smoothly track the at least one vehicle from the at least one floating subframe and the at least one second floating subframe; and (“to automatically follow the given player” Aman, Paragraph 176, or track the given object in McCoy, Paragraphs 59-60. See statement of motivation in Claim 14.) Regarding Claim 2: “The method of claim 14, further comprising determining a direction and a point of entry of the at least one vehicle into the field of view of the second video camera of the plurality of video cameras.” (“to determine a direction and a distance of the first object 102a relative to the cameras 104. In an embodiment, the processor 202 may determine the direction and the distance of the first object 102a relative to the cameras 104 based on a triangulation method,” in McCoy, Paragraph 51.) Claim 3 is rejected for reasons stated for Claim 17 in view of the Claim 1 rejection. Claim 4 is rejected for reasons stated for Claim 15 in view of the Claim 1 rejection. Claim 5 is rejected for reasons stated for Claim 16 in view of the Claim 1 rejection.
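The claim 1 limitation of transforming camera image velocity and position data based on the angle and position of the second camera can be sketched as a simplified hand-off between camera frames. A real system would use full projective camera calibration; the 2D rotation-plus-translation below is a hypothetical illustration only, not drawn from the record:

```python
import math

# Hypothetical sketch: express a tracked world position and velocity in a
# second camera's frame, given that camera's position and viewing angle.
def to_camera_frame(point, cam_pos, cam_angle_rad):
    """Express a world (x, y) point in a camera frame rotated by cam_angle."""
    dx, dy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    c, s = math.cos(cam_angle_rad), math.sin(cam_angle_rad)
    return (c * dx + s * dy, -s * dx + c * dy)

def velocity_in_camera_frame(vel, cam_angle_rad):
    """Rotate a world velocity vector into the camera frame (no translation)."""
    c, s = math.cos(cam_angle_rad), math.sin(cam_angle_rad)
    return (c * vel[0] + s * vel[1], -s * vel[0] + c * vel[1])

# A point 10 units ahead of a camera rotated 90 degrees appears at
# roughly (0, -10) in that camera's frame.
print(to_camera_frame((10.0, 0.0), cam_pos=(0.0, 0.0), cam_angle_rad=math.pi / 2))
```

Transforming both position and velocity into the second camera's frame is what lets the second floating subframe pick up the track smoothly at the hand-off.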
Regarding Claim 6: “The method of claim 1, further comprising constructing the subframe coordinates with the camera image positions of the at least one vehicle centered within the subframe coordinates.” (Note that prior art accomplishes this by: “crop an image captured by the selected first set of cameras based on a position of the one or more objects 102 in the image.” McCoy, Paragraphs 128, 84-85.) Claim 7 is rejected for reasons stated for Claim 18 in view of the Claim 1 rejection. Claim 8, “A system for automatically tracking and analyzing imagery data of at least one vehicle on a racetrack” is rejected for reasons stated for Claim 14, because the method of Claim 14 implements the system elements of Claim 8 using the same “video event management system including a processor, a memory, and a database.” Claim 9 is rejected for reasons stated for Claim 18 in view of the Claim 8 rejection. Claim 10 is rejected for reasons stated for Claim 20 in view of the Claim 8 rejection. Regarding Claim 11: “The system of claim 8, further comprising at least one data acquisition and positioning system including a Global Positioning System (GPS) antenna and a telemetry antenna attached to the at least one vehicle, wherein the video event management system provides telemetry data of the at least one vehicle.” (“the processor 202 may determine, in real time, the current location of the person to be tracked based on one or more signals received from the sensors 106 [general telemetry], such as a GPS sensor [operating by the means of an antenna].” McCoy, Paragraph 63. “any objects on the soccer field 300 may be tracked … The first GPS sensor 306a, the second GPS sensor 306b, and the third GPS sensor 306c may be coupled to collars of the shirts worn by the first player 304a, the second player 304b, and the third player 304c, respectively. … In an embodiment, a Bluetooth sensor (not shown in FIGS. 3A, 3B, and 3C) may be embedded” which is also operating by the means of an antenna. 
McCoy, Paragraphs 88-89.) Claim 12 is rejected for reasons stated for Claim 15 in view of the Claim 8 rejection. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIKHAIL ITSKOVICH whose telephone number is (571)270-7940. The examiner can normally be reached Mon. - Thu. 9am - 8pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Ustaris can be reached at (571)272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MIKHAIL ITSKOVICH/Primary Examiner, Art Unit 2483

Prosecution Timeline

Mar 09, 2023: Application Filed
Jul 15, 2023: Non-Final Rejection — §103
Oct 20, 2023: Response Filed
Nov 04, 2023: Final Rejection — §103
Jan 09, 2024: Response after Non-Final Action
Jan 29, 2024: Response after Non-Final Action
Feb 06, 2024: Request for Continued Examination
Feb 15, 2024: Response after Non-Final Action
Mar 07, 2024: Non-Final Rejection — §103
Mar 20, 2024: Response Filed
Jun 13, 2024: Final Rejection — §103
Jul 25, 2024: Response after Non-Final Action
Aug 04, 2024: Response after Non-Final Action
Aug 20, 2024: Request for Continued Examination
Aug 25, 2024: Response after Non-Final Action
Sep 03, 2024: Non-Final Rejection — §103
Nov 25, 2024: Response Filed
Mar 15, 2025: Final Rejection — §103
Apr 14, 2025: Request for Continued Examination
Apr 23, 2025: Response after Non-Final Action
May 03, 2025: Non-Final Rejection — §103
Aug 06, 2025: Response Filed
Nov 05, 2025: Final Rejection — §103
Jan 29, 2026: Request for Continued Examination
Feb 01, 2026: Response after Non-Final Action
Feb 19, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548733: Automating cryo-electron microscopy data collection (2y 5m to grant; granted Feb 10, 2026)
Patent 12489911: IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, RECEIVING APPARATUS, AND TRANSMITTING APPARATUS (2y 5m to grant; granted Dec 02, 2025)
Patent 12477146: ENCODING AND DECODING METHOD, DEVICE AND APPARATUS (2y 5m to grant; granted Nov 18, 2025)
Patent 12452404: METHOD FOR DETERMINING SPECIFIC LINEAR MODEL AND VIDEO PROCESSING DEVICE (2y 5m to grant; granted Oct 21, 2025)
Patent 12432328: SYSTEM AND METHOD FOR RENDERING THREE-DIMENSIONAL IMAGE CONTENT (2y 5m to grant; granted Sep 30, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 9-10
Grant Probability: 35%
With Interview (+23.8%): 59%
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
