Prosecution Insights
Last updated: April 19, 2026
Application No. 18/577,256

A DATA PROCESSING METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT IN VIDEO PRODUCTION OF A LIVE EVENT

Non-Final OA: §103, §112

Filed: Jan 06, 2024
Examiner: BITAR, NANCY
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Spiideo AB
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 83% (786 granted / 946 resolved), +21.1% vs TC avg (above average)
Interview Lift: +8.2% (moderate), based on resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 32 applications currently pending
Career History: 978 total applications across all art units
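As a sanity check on the arithmetic behind these cards: the allow rate is the granted share of resolved cases, and the "with interview" figure appears to add the interview lift as percentage points. A minimal sketch of that reading follows; the additive-lift assumption and the function names are ours, not the tool's documented method.

```python
# Minimal sketch of how the headline figures appear to be derived from the
# examiner's career counts. The additive interview lift is an assumption.

def career_allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that were granted."""
    return granted / resolved

def with_interview(base_rate: float, lift_pct_points: float) -> float:
    """Apply the interview lift as additive percentage points, capped at 100%."""
    return min(base_rate + lift_pct_points / 100, 1.0)

base = career_allow_rate(granted=786, resolved=946)   # ~0.831 -> "83%"
boosted = with_interview(base, lift_pct_points=8.2)   # ~0.913 -> "91%"
print(f"Career allow rate: {base:.1%}")
print(f"With interview:    {boosted:.1%}")
```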

Statute-Specific Performance

§101: 13.3% (-26.7% vs TC avg)
§103: 62.1% (+22.1% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 946 resolved cases.
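For readers who want to reproduce this kind of breakdown from raw prosecution records, a hypothetical sketch of the tabulation follows. The case records, statute sets, and TC baseline values below are illustrative stand-ins; the tool's actual methodology is not disclosed here.

```python
# Hypothetical sketch of tabulating statute-specific rejection rates for an
# examiner and comparing them against a Tech Center baseline. The case list
# and baseline values are illustrative, not the dashboard's real data.
from collections import Counter

def statute_rates(cases: list[set[str]]) -> dict[str, float]:
    """Fraction of resolved cases whose office actions cite each statute."""
    counts = Counter(statute for statutes in cases for statute in statutes)
    return {s: n / len(cases) for s, n in counts.items()}

# Each case is the set of statutes raised across its office actions.
cases = [{"103"}, {"103", "112"}, {"101"}, {"103"}, {"102", "103"}]
tc_baseline = {"101": 0.40, "102": 0.40, "103": 0.40, "112": 0.40}

for statute, rate in sorted(statute_rates(cases).items()):
    delta = rate - tc_baseline.get(statute, 0.0)
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```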

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Claim 34 includes the claim limitations "a data receiving unit; a sensor data processing unit; a video obtaining unit; a movement pattern analysis unit and a pattern identification unit," which have been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because they use the linking phrase "configured to" coupled with functional language respectively recited after each of the aforementioned claim limitations, without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier.

A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation: see figure 1B and corresponding text.

If applicant wishes to provide further explanation or dispute the examiner's interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action. If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f), applicant may amend the claim(s) so that they will clearly not invoke 35 U.S.C. 112(f), or present a sufficient showing that the claims recite sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f). For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

The following is a quotation of the fourth paragraph of 35 U.S.C. 112:

Subject to the [fifth paragraph of 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 35 is rejected under 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. The claim recites "A non-volatile computer program product stored on a tangible computer readable medium and comprising computer code for performing the method according to claim 19." Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 19-35 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 2020/0215410) in view of Deixler et al. (US 2021/0315084).

As to claim 19, Li et al. teaches the processing method in video production of a live event involving a plurality of persons acting in a real-world target field, the data processing method comprising: receiving wireless communication signals that contain respective sensor data obtained for the plurality of persons (sensor aligner 46; figure 4), the sensor data including motion-triggered data for each person (MOTUS and ZEPP may attach sensors (e.g., accelerator and gyro) to the bat to capture a baseball swing motion including speed, orientation, time to contact, paragraph [0032]); processing the motion-triggered data in the received wireless communication signals to determine respective first movement patterns for at least some of the plurality of persons as they act in the real-world target field (the information from the sensor hub may include or be combined with input data from user and/or participant devices, paragraphs [0027-0028]; aligning sensor data with the swing motion captured in a video in a swing metric application may include a player wearing a wearable jersey with embedded sensors and turning the jersey on in preparation for batting practice at block 61, paragraph [0039]); obtaining a video stream of the real-world target field (aligning the sensor motion data with the corresponding motion in the video at the same time the motion is occurring (e.g., as determined by image processing and/or computer vision action recognition) may provide a better user experience, paragraph [0034] and figure 5); processing the video stream to determine respective second movement patterns for at least some of the plurality of persons as they act in the real-world target field; analyzing the first movement patterns and the second movement patterns for matches between them (the logic is further to identify two or more participants in the video, associate each participant with a sensor worn by the participant, and overlay sensor-related information corresponding to the associated participant in the video, paragraph [0072]).

While Li et al. teaches the limitations above, Li et al. fails to teach "for each particular second movement pattern that matches a particular first movement pattern, identifying a person having the particular second movement pattern as being associated with
the sensor data that comprises the motion-triggered data from which the particular first movement pattern was determined."

However, Deixler teaches: determine a first movement pattern from said sensor data; obtain communication data comprising wireless communication signals exchanged between electronic devices of a wireless network within said space; determine a second movement pattern from said communication data; determine whether said first movement pattern matches with said second movement pattern, so as to detect the object in said space; and perform an action upon determining a match (abstract). Deixler et al. further teaches that the identification data may for example be within a dataset together with the sensor data, or be part of the sensor data; the identification data may be sent to the device by a portable device comprising said portable sensor, along with determination of the second movement pattern (via e.g. RF-based sensing) (paragraphs [0042-0043]). Additionally, Deixler teaches that the processor 213 determines whether said first movement pattern 91 matches with said second movement pattern 82, so as to detect the object 80 within said space 30 (paragraph [0081]).

It would have been obvious to one skilled in the art before the filing of the claimed invention to use Deixler's processor to identify the object by using the motion data in order to uniquely identify users and detect and quantify the presence/motion of people and other objects within an area of interest. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

As to claim 20, Li et al. teaches the data processing method as defined in claim 19, wherein the motion-triggered data in the sensor data obtained for each person are data from a gyro, accelerometer, magnetometer or inertial measurement unit comprised in a sensor device attached to the body of the person (sense engine may include a sensor hub communicatively coupled to two dimensional (2D) cameras, three dimensional (3D) cameras, depth cameras, gyroscopes, accelerometers, inertial measurement units (IMUs), first and second order motion meters, location services, microphones, proximity sensors, thermometers, biometric sensors, paragraphs [0027-0028]).

As to claim 21, Li et al. teaches the data processing method as defined in claim 19, wherein, in addition to motion-triggered data, the sensor data obtained for each person also comprises biometrical data (biometric sensors, paragraph [0028]).

As to claim 22, Li et al. teaches the data processing method as defined in claim 19, further comprising: producing an output video stream from the video stream, the output video stream having one or more computer-generated visual augmentations associated with one or more of the identified persons (the sensor(s) also output location on the field information; the visual object location may be matched to the sensor location to extract the right sensor data; the method 90 may then include overlaying the sensor-related information data near or on top of the selected player in the video with the support of player tracking at block 95, paragraph [0043]; FIG. 8 shows one snapshot of a basketball game where some embodiments may improve the user experience by overlaying the latest statistical data for each player on the video in the context of augmented reality and virtual reality).
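To make the technique at the heart of claim 19 concrete (matching sensor-derived motion against video-derived motion so that identities attach to tracked persons), here is a minimal hypothetical Python sketch. It is not code from the application or the cited references; the zero-lag Pearson matcher, the threshold, and all names are our assumptions.

```python
# Hypothetical sketch of claim 19's matching step: pair each video-derived
# movement pattern with the best-matching sensor-derived pattern, then carry
# the sensor's identity over to the video track.
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized similarity between two equally sampled motion signals."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def identify(sensor_patterns: dict[str, np.ndarray],
             video_patterns: dict[int, np.ndarray],
             threshold: float = 0.7) -> dict[int, str]:
    """Map video track id -> person id for matches above the threshold."""
    matches = {}
    for track_id, video_sig in video_patterns.items():
        person, score = max(
            ((pid, pearson(sig, video_sig)) for pid, sig in sensor_patterns.items()),
            key=lambda kv: kv[1],
        )
        if score >= threshold:
            matches[track_id] = person
    return matches

# Toy example: two persons, two video tracks with matching (noisy) motion.
t = np.linspace(0, 10, 500)
sensors = {"player_7": np.sin(2 * t), "player_15": np.cos(5 * t)}
tracks = {0: np.cos(5 * t) + 0.1 * np.random.randn(500),
          1: np.sin(2 * t) + 0.1 * np.random.randn(500)}
print(identify(sensors, tracks))  # e.g. {0: 'player_15', 1: 'player_7'}
```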
As to claim 23, Li et al. teaches the data processing method as defined in claim 22, wherein the one or more computer-generated visual augmentations include a reproduction of biometrical data comprised in the sensor data obtained for the identified person (FIG. 8 shows one snapshot of a basketball game where some embodiments may improve the user experience by overlaying the latest statistical data for each player on the video in the context of augmented reality and virtual reality).

As to claim 24, Li et al. teaches the data processing method as defined in claim 23, wherein the one or more computer-generated visual augmentations include a reproduction of some or all of the motion-triggered data obtained for the identified person (FIG. 8 shows one snapshot of a basketball game where some embodiments may improve the user experience by overlaying the latest statistical data for each player on the video in the context of augmented reality and virtual reality; for example, the bounding box 82 around the player with jersey number 15 (e.g., autodetected or by manually checking his jersey number) may correspond to a player identification (ID); some embodiments may use this ID to extract the corresponding sensor data and overlay statistical information 84 on the screen near the player location; the statistical information 84 may follow the player around the screen as the player changes locations; the position of the displayed information relative to the player may change based on the player's location on the screen and other contextual information such as the location of the basket, the location of other players, etc.; what information is displayed and various display location preferences may be user configurable, paragraph [0041]).

As to claim 25, Li et al. teaches the data processing method as defined in claim 19, further comprising: streaming the output video stream onto a data network for allowing a plurality of client devices to receive and present the output video stream on a display of each client device (drivers (not shown) may comprise technology to enable users to instantly turn on and off the platform 702 like a television with the touch of a button after initial boot-up, when enabled, for example; program logic may allow the platform 702 to stream content to media adaptors or other content services device(s) 730 or content delivery device(s) 740, paragraph [0060]).

As to claim 26, Deixler et al. teaches the data processing method as defined in claim 19, wherein the analyzing of the first movement patterns and the second movement patterns for matches between them comprises: generating a machine learning based correlation factor (machine learning may e.g. further educate the device in recognizing said movement patterns; said movement pattern may e.g. be a gesture, e.g. a gesture of a human, paragraph [0016]), the machine learning based correlation factor being trained on said motion-triggered data and said video stream, the machine learning based correlation factor recognizing one or more correlations between the first and second movement patterns (the processor determines whether the first movement pattern matches (or corresponds) with said second movement pattern; an accurate match may be found, which is less susceptible to false positives or false triggers, because the object detected via the (e.g.
RF-based sensed) second movement pattern may be directly correlated and/or confirmed with the object detected via the (wearable-sensor-sensed) first movement pattern, paragraph [009]); and associating the one or more correlations with a relevance level including normalized values of different actions associated with the plurality of persons acting in the real-world target field, wherein said one or more correlations are sorted based on respective relevance levels (RF based sensing can perform true presence detection; for true presence detection, RF based sensing analyses a semi-static offset in RF disturbance compared to a known default background without the person (i.e. baseline) (e.g. determined over a long period of time by machine learning); presence may be inferred by observing a shift in absolute values of the parameter(s) of the wireless communication signals compared to said default background without the person (i.e. baseline), because the properties of the person (even when not moving) may interfere with the wireless communication signals and affect the value of a parameter thereof, paragraph [0031]).

As to claim 27, Deixler et al. teaches the data processing method as defined in claim 19, wherein the motion-triggered data in the sensor data as well as video frames of the obtained video stream comprise time stamps, and wherein the analyzing of the first movement patterns and the second movement patterns for matches between them is temporally confined by use of the time stamps (the second movement pattern can thereafter be assessed to determine whether it matches the first movement pattern; moreover, said measuring of a change of a parameter may be performed during a time window, paragraph [0027]).

As to claim 28, Deixler et al. teaches the data processing method as defined in claim 26, wherein the motion-triggered data in the sensor data as well as video frames of the obtained video stream comprise time stamps, and wherein the analyzing of the first movement patterns and the second movement patterns for matches between them is temporally confined by use of the time stamps, and wherein, based on the machine learning based correlation factor, the method further comprises: defining a pattern matching threshold as a predetermined value being indicative of a minimum performance requirement of the analyzing step; and adjusting the time stamps of the motion-triggered data and/or the video frames upon the machine learning based correlation factor being below the pattern matching threshold (RF based sensing is capable of detecting and classifying movement signatures; this is achieved by verifying within a certain time window how much the parameter(s) of the wireless communication signals have varied with respect to a previous threshold/baseline, whether these changes significantly exceed a certain level such that they cannot be attributed to channel noise, whether the combined variations match a specific pattern known to only be attributable to motion, etc., paragraphs [0030], [0035]).
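Claims 27 and 28 describe time-stamp-confined matching with a fallback that re-aligns clocks when the correlation factor falls below a threshold. A hypothetical sketch of that loop follows; the integer-lag search, the 0.7 threshold, and the helper names are assumptions, not the application's implementation.

```python
# Hypothetical sketch of claims 27-28: match only within a shared time
# window, and if the correlation falls below the pattern matching threshold,
# shift the sensor time stamps by the best-scoring lag and retry.
import numpy as np

def best_lag_correlation(sensor: np.ndarray, video: np.ndarray,
                         max_lag: int = 25) -> tuple[int, float]:
    """Search small integer lags (in samples) for the highest correlation."""
    def corr(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = sensor[lag:], video[:len(video) - lag]
        else:
            a, b = sensor[:lag], video[-lag:]
        if len(a) > 1:
            scores[lag] = corr(a, b)
    lag = max(scores, key=scores.get)
    return lag, scores[lag]

def match_with_realignment(sensor: np.ndarray, video: np.ndarray,
                           threshold: float = 0.7) -> tuple[bool, int]:
    """Return (matched, lag_in_samples); re-align time stamps if needed."""
    zero_lag_score = best_lag_correlation(sensor, video, max_lag=0)[1]
    if zero_lag_score >= threshold:
        return True, 0
    lag, score = best_lag_correlation(sensor, video)  # adjust time stamps
    return score >= threshold, lag
```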
As to claim 29, Li et al. teaches the data processing method as defined in claim 19, wherein the processing of the video stream to determine respective second movement patterns for at least some of the plurality of persons as they act in the real-world target field involves: applying image recognition and object tracking functionality to a sequence of video frames of the video stream to single out and track different persons acting in the field (analyze and/or perform feature/object recognition on images captured by a camera; for example, machine vision and/or image processing may identify and/or recognize participants or objects in a scene (e.g., a person, an animal, a bat, a club, a ball, etc.); the machine vision system may also be configured to perform facial recognition, gaze tracking, facial expression recognition, action recognition, action classification, pose recognition, and/or gesture recognition including body-level gestures, arm/leg-level gestures, hand-level gestures, and/or finger-level gestures, paragraph [0030]); and deriving the respective second movement patterns from an output of the image recognition and object tracking functionality (to recognize the player that the user taps, some embodiments may employ either jersey number, face recognition, or other marker recognition; upon the tap, the method 90 may include performing player detection and recognition to identify who the player is, and then employing player tracking to track this visual object in the captured video at block 93; to detect and track the player, any useful technology may be used such as fast region-based convolutional network (fast-RCN), kernel correlation filter (KCF), paragraph [0042]).

As to claim 30, Li et al. teaches the data processing method as defined in claim 19, wherein the sensor data comprises a unique identifier being adapted to uniquely distinguish a person among the plurality of persons appearing in the video stream, the method further comprising: analyzing the unique identifier for resolving ambiguities of said person among the plurality of persons appearing in the video stream (select a player of interest to the user 101, track the location of the selected player on the screen 105 as they move around the court, identify sensors associated with the selected player, and overlay metrics for the selected player on the screen 105 such that the overlay is near the selected player but does not obstruct the view of the selected player on the screen 105, paragraph [0044]).

As to claim 31, Li et al. teaches the data processing method as defined in claim 30, wherein the unique identifier is used for resolving ambiguities resulting from a temporary incapability of the image recognition and object tracking functionality to single out and track different persons acting in the field in the sequence of video frames of the video stream (other points in the swing (e.g., corresponding with the stages in FIG. 5) may also be synchronized with the overlay of different sensor-related information; for example, the swing metric application may pause at different swing stages and overlay the appropriate sensor-related information for each stage, figures 7 and 8 and paragraph [0041]).
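Claim 29's video branch (detect persons per frame, link detections into tracks, and turn each track into a movement signal) follows a standard tracking-by-detection pattern. The minimal sketch below is illustrative only: the greedy nearest-centroid linker and the per-frame centroid speed stand in for whatever detector/tracker (e.g., an R-CNN variant plus a correlation filter, as the examiner's citations suggest) a real system would use.

```python
# Minimal sketch of claim 29's video branch: per-frame person detections are
# linked into tracks, and each track yields a "second movement pattern"
# (here, frame-to-frame centroid speed). All names are assumptions.
import numpy as np

def link_tracks(frames_of_centroids: list[list[tuple[float, float]]],
                max_jump: float = 50.0) -> dict[int, list[tuple[float, float]]]:
    """Greedily link detections across frames by nearest centroid."""
    tracks: dict[int, list[tuple[float, float]]] = {}
    next_id = 0
    prev: dict[int, tuple[float, float]] = {}
    for detections in frames_of_centroids:
        current: dict[int, tuple[float, float]] = {}
        for c in detections:
            best = min(prev.items(),
                       key=lambda kv: np.hypot(kv[1][0] - c[0], kv[1][1] - c[1]),
                       default=None)
            if best and np.hypot(best[1][0] - c[0], best[1][1] - c[1]) < max_jump:
                tid = best[0]
                prev.pop(tid)        # each track matches at most one detection
            else:
                tid, next_id = next_id, next_id + 1
            tracks.setdefault(tid, []).append(c)
            current[tid] = c
        prev = current
    return tracks

def movement_pattern(track: list[tuple[float, float]]) -> np.ndarray:
    """Second movement pattern: per-frame speed of the tracked centroid."""
    pts = np.asarray(track)
    return np.hypot(*np.diff(pts, axis=0).T)
```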
As to claim 32, Deixler et al. teaches the data processing method as defined in claim 29, further involving: improving the performance of the image recognition and object tracking functionality based on feedback received from matches between the first and second movement patterns (obtaining communication data comprising wireless communication signals exchanged between electronic devices of a wireless network within said space; determining a second movement pattern from said communication data; determining whether said first movement pattern matches with said second movement pattern, so as to detect the object in said space; and performing an action upon determining a match, paragraph [0050]).

As to claim 33, Deixler et al. teaches the data processing method as defined in claim 19, wherein upon no match between the first and second movement patterns having been established for a particular time, the method further involves postponing a next iteration of the identifying of a person until one of a new first or second movement pattern has been determined (such an action would not be provided if the match indicated that a person was detected, because the present invention allows for an accurate and personalized detection, paragraphs [0074-0075]).

The limitations regarding claims 34-35 have been addressed above.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR, whose telephone number is (571) 270-1041. The examiner can normally be reached Monday to Friday from 8:00 am to 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mrs. Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NANCY BITAR/
Primary Examiner, Art Unit 2664

Prosecution Timeline

Jan 06, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599437
PRE-PROCEDURE PLANNING, INTRA-PROCEDURE GUIDANCE FOR BIOPSY, AND ABLATION OF TUMORS WITH AND WITHOUT CONE-BEAM COMPUTED TOMOGRAPHY OR FLUOROSCOPIC IMAGING
2y 5m to grant; granted Apr 14, 2026
Patent 12597132
IMAGE PROCESSING METHOD AND APPARATUS
2y 5m to grant; granted Apr 07, 2026
Patent 12597240
METHOD AND SYSTEM FOR AUTOMATED CENTRAL VEIN SIGN ASSESSMENT
2y 5m to grant; granted Apr 07, 2026
Patent 12597189
METHODS AND APPARATUS FOR SYNTHETIC COMPUTED TOMOGRAPHY IMAGE GENERATION
2y 5m to grant; granted Apr 07, 2026
Patent 12591982
MOTION DETECTION ASSOCIATED WITH A BODY PART
2y 5m to grant; granted Mar 31, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 91% (+8.2%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 946 resolved cases by this examiner. Grant probability derived from career allow rate.
