DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-14 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Every et al. (Pub No US 2022/0358405) in view of Raviv et al. (Pub No US 2025/0130822). Hereinafter, referenced as Every and Raviv, respectively.
Regarding claim 1, Every discloses a method comprising:
receiving, by a computing system, data for a game (sporting event, e.g. soccer), the data comprising at least one of tracking data or event data (Paragraphs [0030] [0075] figure 7; receive event data for a game 702, e.g. soccer);
determining, by the computing system, an occurrence of a trigger event within the game based on the data for the game (Paragraphs [0076] figure 7; generate a plurality of artificial intelligence driven metrics based on the event data 704 in order to generate a plurality of insights based on the artificial intelligence driven metrics 706);
providing the data for the game and the occurrence of the trigger event to a first machine (Figure 1; e.g. computing system 104), wherein the first machine generates a graphic based on the data for the game and the occurrence of the trigger event (Paragraphs [0032] [0082] figures 1 and 7; computing system 104 generates a graphical user interface that includes the event data and at least one insight 710);
receiving, from the first machine, the graphic based on the data for the game and the occurrence of the trigger event within the game (Paragraphs [0061] [0082] figures 3 and 7; insight 306 generation);
and generating, by the computing system, a visual element (e.g. insight graphical element 306) including the graphic for presentation within a user interface (e.g. GUI 300), the visual element being configured to include an interactive element (Figure 3, e.g. like 310, publish to social media 312, etc.) or be positioned adjacent to the interactive element within the user interface (Paragraphs [0061]-[0063] figure 3).
However, it is noted that Every does not explicitly disclose that the first machine may be a first machine learning model, wherein the first machine learning model is trained to generate a graphic.
Nevertheless, in a similar field of endeavor Raviv discloses that the first machine may be a first machine learning model, wherein the first machine learning model is trained to generate a graphic (Paragraph [0053]; machine-learning model may generate a graphic visual for each script segment that corresponds to a relevant video segment).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Every by specifically providing the elements mentioned above, as taught by Raviv, for the predictable result of taking advantage of machine learning models that automate repetitive tasks, boosting efficiency and scalability across multiple devices.
Regarding claim 2, Every and Raviv disclose the method of claim 1; moreover, Every discloses that the trigger event is a pre-determined trigger event type (Paragraph [0030]; tracking players, balls, referees, etc. in a sporting event, e.g. soccer).
Regarding claim 3, Every and Raviv disclose the method of claim 1; moreover, Every discloses dynamically determining, by the computing system and using a second machine learning model, the trigger event (Paragraph [0077] figures 3 and 7; insights 306 may be generated via one or more machine learning models based on the event data and the plurality of artificial intelligence driven metrics).
Regarding claim 4, Every and Raviv disclose the method of claim 1; moreover, Every discloses that the graphic is generated further based on one or more of user data, a statistic, or broadcast video data (Paragraph [0062] figure 3 and Claim 5 of the publication; insights may include options such as, like 310, publish to social media 312, etc., wherein insights are ranked based on learned preferences of end users).
Regarding claim 5, Every and Raviv disclose the method of claim 4; moreover, Every discloses that the user data represents one or more of user preference data or user behavioral data (Claim 5 of the publication; insights are ranked based on learned preferences of end users).
Regarding claim 8, Every and Raviv disclose the method of claim 1; moreover, Every discloses outputting, using a second machine learning model, the event data (Paragraph [0077] figures 3 and 7; insights 306 may be generated via one or more machine learning models based on the event data and the plurality of artificial intelligence driven metrics).
Regarding claim 9, Every and Raviv disclose the method of claim 1; moreover, Every discloses that the visual element is generated in less than 30 seconds (Paragraphs [0054] [0061] figure 3A; each graphical element insight 306 is generated in real time or near-real time as the event is progressing in the sporting game).
Regarding claims 10-14 and 17-18, Every and Raviv disclose all the limitations of claims 10-14 and 17-18; therefore, claims 10-14 and 17-18 are rejected for the same reasons stated in claims 1-5 and 8-9, respectively.
Regarding claims 19 and 20, Every and Raviv disclose all the limitations of claims 19 and 20; therefore, claims 19 and 20 are rejected for the same reasons stated in claims 1 and 3, respectively.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Every and Raviv further in view of Dubin et al. (Pub No US 2015/0058730). Hereinafter, referenced as Dubin.
Regarding claim 6, Every and Raviv disclose the method of claim 1; moreover, Every discloses receiving, using the computing system, broadcast video data for the game (Paragraphs [0054] [0061] figure 3A; each graphical element insight 306 is generated in real time or near-real time as the event is progressing in the sporting game); and determining, using a second machine learning model (Paragraph [0077] figures 3 and 7; insights 306 may be generated via one or more machine learning models based on the event data and the plurality of artificial intelligence driven metrics), a segment of the broadcast video data associated with the graphic (e.g. insight graphical element 306).
However, it is noted that Every and Raviv do not explicitly disclose that the visual element includes the segment of the broadcast video data or a link to the segment of the broadcast video data.
Nevertheless, in a similar field of endeavor Dubin discloses that the visual element includes the segment of the broadcast video data or a link to the segment of the broadcast video data (Paragraph [0095] figure 4A; graphical tiles 410 includes links 411, 413, etc. to the video recording of the notified plays).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Every and Raviv by specifically providing the elements mentioned above, as taught by Dubin, for the predictable result of allowing the user to view the actual game play in the notification, increasing satisfaction and further interactivity with the system.
Regarding claim 15, Every, Raviv and Dubin disclose all the limitations of claim 15; therefore, claim 15 is rejected for the same reasons stated in claim 6.
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Every and Raviv further in view of Junkin et al. (Patent No US 8,176,518). Hereinafter, referenced as Junkin.
Regarding claim 7, Every and Raviv disclose the method of claim 1; moreover, Every discloses receiving the interactive element (Paragraphs [0061] [0062] figure 3A; insight 306 with interactive elements 310-314, in addition to graphical element 432 displaying player stats; e.g. most touches, paragraph [0067] figure 4B).
However, it is noted that Every and Raviv do not explicitly disclose that the interactive element represents at least one of a market-based prediction offer, an advertisement, a questionnaire, or a poll, wherein the interactive element was selected based on one or more of the visual element, the graphic, or user data.
Nevertheless, in a similar field of endeavor Junkin discloses that the interactive element represents at least one of a market-based prediction offer, an advertisement, a questionnaire, or a poll, wherein the interactive element was selected based on one or more of the visual element, the graphic, or user data (Col. 17 lines 12-41 figure 19; display region 1920 displays standings and region 1922 displays interactive questions to the viewer related to the sporting event).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Every and Raviv by specifically providing the elements mentioned above, as taught by Junkin, for the predictable result of allowing the user to answer questions related to the sporting event, immersing the viewer even further in the intensity of the game.
Regarding claim 16, Every, Raviv and Junkin disclose all the limitations of claim 16; therefore, claim 16 is rejected for the same reasons stated in claim 7.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUNIOR O MENDOZA whose telephone number is (571)270-3573. The examiner can normally be reached Mon-Fri 10am-6pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin Bruckart can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JUNIOR O. MENDOZA
Primary Examiner
Art Unit 2424
/JUNIOR O MENDOZA/Primary Examiner, Art Unit 2424