DETAILED ACTION
Status of the Application
This Office Action is in response to Application Serial No. 17/730,831. In response to the Examiner's action mailed July 30, 2025, Applicant submitted amendments and arguments mailed October 30, 2025. Claims 1, 4, 6, and 8-11 are examined. Applicant amended claims 1 and 9. Claims 2, 3, 5, and 7 are canceled. Claims 12-17 are withdrawn. The pending claims are examined in this application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
Applicant did not provide an information disclosure statement for consideration.
Response to Amendments
Applicant’s amendments have been considered. Claims 1, 4, 6, 8-17 are pending in this application. Claims 1 and 9 are amended. Claims 2, 3, 5, and 7 are canceled.
Regarding the 35 U.S.C. 101 rejection, Applicant's amendments are not persuasive. Claims 1, 4, 6, 8, 10, and 11 are not patent eligible under 35 U.S.C. 101; see below.
Examiner submits claim 9 is patent eligible. (Applicant is encouraged to request an interview.)
Regarding the 35 U.S.C. 103 rejection: Applicant canceled claims 2, 3, 5, and 7 and amended claims 1 and 9. Claims 1, 4, 6, and 8-11 are rejected under 35 U.S.C. 103; see below.
Response to Arguments
Applicant's arguments filed October 30, 2025 have been fully considered but they are not persuasive and/or are moot in view of the revised rejections. Applicant's arguments are addressed herein below.
Rejection of claims under 35 U.S.C. 101
On pages 5-8 of Applicant's 35 U.S.C. 101 arguments, Applicant traverses:
1. Step 2A: Claim Is Not Directed to an Abstract Idea. Applicant submits claim 1, as amended, is not directed to an abstract idea; it is directed to a specific, improved technological method for implementing competency assessment using integrated computer components in a novel way. Applicant traverses that the claimed specific combination of steps, integrating multimedia playback, user interaction timed to the playback, automatic data generation (timestamp), digital file modification (tagging), and structured database storage, is fundamentally technological. It cannot be practically performed by a human mentally or with pen and paper. A human cannot mentally "play" a multimedia file, simultaneously provide an interactive list, receive a selection tied to the exact moment of playback, generate a precise digital timestamp for that moment, digitally "tag" the file, and create a linked digital database record. The claimed method requires the specific capabilities of a computer system operating on digital data. Therefore, the claim is not directed to the abstract idea of assessment itself, but rather to a specific technological implementation and improvement of that process. The focus is on the specific "how" – the coordinated interaction between the user interface, multimedia player, database, and processing logic to create a specific, verifiable, and digitally indexed record of competency linked to precise evidence within a multimedia file. This represents a practical application of assessment concepts, integrating them into a specific technological process.
Examiner respectfully disagrees with Applicant's 35 U.S.C. 101 Step 2A arguments. At Step 2A, the claims are evaluated using a Two-Prong Inquiry to determine whether a claim recites a judicial exception and whether the exception is integrated into a practical application of the exception.
At Step 2A Prong One, the claim is directed to a judicial exception. The claim recites providing ... a list of predetermined competencies, receiving a user input indicat[ing] a selection of one or more predetermined competencies, determining a time that demonstrates the ability of the individual in performing the task, generating a timestamp, and storing the timestamp and the respective competency criterion. The limitations of playing/observing a media file of a person completing a task, determining the person completed the task, and recording the time the task is complete are an evaluation, observation, and judgment. Thus, the limitations recite a mental concept. The limitations describe gathering data generated from observing input of a user using a gamification system – game-like features for education. A timestamp can be a rubber stamp or a handwritten recording.
At Step 2A Prong Two, the Application describes using a user interface to play and observe multimedia files and storing timestamps in a database. The claim limitations, instant application [063], and Figure 4 are assessing a particular task and identifying a particular time frame that satisfies the particular task. Observations such as identifying timestamps for a particular task's time frame by playing/observing media files played on a computer system amount to adding the words "apply it" (or an equivalent) to a judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).
Regarding improvements, the claims use a computer to make an observation of recorded tasks. Applicant is encouraged to integrate claim 9 and the elements in the instant application [063]-[065] into the claim limitations.
The claims are not directed to a practical application.
Step 2B: Claim 1 Recites Significantly More. Applicant submits that, even if claim 1 were considered directed to an abstract idea, it recites significantly more, transforming the idea into a patent-eligible application. The claimed combination of elements provides an inventive concept beyond merely automating a mental process on a generic computer.
Improved Technological Process: The claimed method provides:
Precise Integrated Evidence Linking: a specific improvement to the technology of competency assessment and digital record-keeping. Applicant submits the method provides "instant recall," a specific technological solution of precise, integrated evidence linking, by generating a timestamp when the user identifies a demonstrated competency during playback and storing the timestamp link in a database.
Specific Data Structure: amended claim 1 specifies storing, at a database, a timestamp linked against the respective competency criterion.
Interactive User Interface Tied to Playback: the claim requires providing the competency list when the file is played and receiving the selection at the current play time, and is not just generic input/output.
Applicant respectfully submits that claims 1, 4, 6, 8, 10, and 11 are directed to patent-eligible subject matter under 35 U.S.C. 101 and requests withdrawal of the rejection.
Examiner respectfully disagrees with Applicant's Improved Technological Process arguments. At Step 2B, Examiner gives weight to all additional elements claimed, as in Prong Two, to determine whether the claims pertain to an improvement to the functioning of a computer, or to technology or technological processes. The argued recall function is a function selected by a user, e.g., a user input selecting a link, instant application specification [065], [Figure 16]. Within the specification [065]-[067], the user is watching a video. As the video is examined and reviewed, the Assessor, or another person reviewing the video, may select all the relevant tasks and criteria 24 at the time of observation in the video, which is the interactive user playback. The user is able to enter a user input ... the system 1 determines and embeds a timestamp. The claims recite uploading a file to a database; the files are timestamped, and the timestamp provides a reference point to recall the video. Timestamping a document during a file upload is a known and common process in computer functioning. At Step 2B, the claims do not amount to significantly more.
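For illustration only, the argued storing and recall steps can be pictured as a minimal sketch; the table, field, and function names below are hypothetical and are not drawn from the claims, the specification, or the cited art.

```javascript
// Illustrative sketch only: all identifiers are hypothetical.
const competencyTable = [];

// Store a row linking a user-generated timestamp to a competency
// criterion identifier, in the manner the argued storing step describes.
function storeSelection(timestamp, criterionId, description, fileId) {
  const row = { timestamp, criterionId, description, fileId };
  competencyTable.push(row);
  return row;
}

// The stored timestamp later serves as a reference point to recall the
// relevant portion of the multimedia file (the argued "instant recall").
function recall(criterionId) {
  return competencyTable
    .filter(row => row.criterionId === criterionId)
    .map(row => ({ fileId: row.fileId, seekTo: row.timestamp }));
}
```

As the sketch shows, the recall function is no more than a lookup keyed on the stored link, consistent with the Examiner's position that such timestamp-linked storage is a known database function.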
Not Merely "Apply It" on a Generic Computer: Applicant submits the claim does not merely take an abstract idea and say "apply it"; it leverages specific computer capabilities, integrated in a specific way to achieve an improved assessment process.
Examiner respectfully disagrees with Applicant's not merely "apply it" on a generic computer argument. Examiner does not refer to the computer as generic. As evinced in Applicant's instant specification [046], "An on-line web-based environment 100 is defined as any suitable database, hardware, software, network, or application, including but not limited to, intranets, internets, or any other web-based environment available on a single computer, a network of computers, a local server, one or more hand-held mobile devices, or on an intranet or on the internet and world-wide-web." Applicant's specification [0135] discloses the present invention may be implemented by using hardware only or by using software and a necessary universal hardware platform.
As previously argued, Examiner submits that playing/observing media files played on a computer system amounts to adding the words "apply it" (or an equivalent) to a judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, Applicant's arguments are not persuasive. At Step 2B, the claims do not amount to significantly more.
Transformation: Applicant submits the process transforms raw multimedia data and user input into a structured, indexed, and verifiable digital record.
Examiner respectfully disagrees with Applicant's transformation argument. Applicant's instant specification does not teach transformation of raw multimedia and user input into a structured record; therefore, transformation is not necessitated. Applicant's argument is not persuasive.
The claims are not patent eligible. See 35 U.S.C. 101 rejection below.
Examiner submits, reviewing specification [065], [Figure 11B], Applicant's claims appear related to media timestamping and media indexing. As claimed, a user input identifies when the competency is completed. The user selects a point in the media when the competency is completed, which is timestamped. When the user uploads the media to a database, the file is timestamped. The claims also seem to involve gamification. Applicant is encouraged to request an interview.
Rejection of claims under 35 U.S.C. 103
On pages 8-12 of Applicant's 35 U.S.C. 103 arguments, Applicant traverses that, as to claims 1, 4, 6, 8, and 9, Sturpe in view of Packard fails to teach or suggest the claimed invention:
The combination of Sturpe in view of Packard fails to teach or suggest key elements of claim 1:
Providing the List of Competencies During Playback for User Selection (neither reference teaches providing an interactive list of predetermined competencies linked to specific assessment criteria to a user while the multimedia file is playing).
Applicant’s specification [065] teaches a user scrolling through a predetermined list of competencies. A predetermined list is a checklist.
Receiving Selection at Current Play Time (neither reference teaches receiving a user input selecting a competency at the specific current play time of the multimedia file when the list is provided.)
Generating Timestamp Based on User Selection at Play Time (claim 1 generates a timestamp corresponding to the portion determined in accordance with the current play time where the user indicated the competency was demonstrated.)
Specific Tagging (claim 1 tags the multimedia file with the timestamp and the respective description of the selected competency.)
Specific Database Storage (amended claim 1 requires storing "the timestamp linked against the respective competency criterion…"). The art does not teach creating a specific database table recording a user-generated timestamp to a competency criterion identifier for assessment purposes.
Applicant submits the combination of Sturpe and Packard fails to teach or suggest the specific integrated process of claim 1.
Examiner acknowledges Applicant's arguments. Applicant's amendments necessitate new grounds of rejection. In light of Applicant's arguments, Baker (US 2015/0312652 A1) is used to teach indexing a segment list, indexing video segments, the start and end of segments, and hyperlinks to an index table. Baker [005], [093]-[094]. Examiner does not find Applicant's arguments and amendments persuasive. See prior art rejection below.
Dependent Claims 4, 6, 8, and 9: Applicant submits:
Claim 4: Applicant submits the combination of Packard and Sturpe does not suggest the specific implementation of the JavaScript currentTime detail within the context of Sturpe's assessment.
Examiner acknowledges Applicant's arguments. Examiner submits software is coded using a software language; in this Application, Applicant selected JavaScript. (Although not relied on, Arntzen teaches implementations in plain JavaScript and that the HTML5 media clock supports modification of currentTime, pause/play, and adjustments to playback rate, and that the sequencing of text tracks should always be consistent. Arntzen also teaches timestamped messages. Arntzen [section 3.1], [section 6], [section 8.1]. Additionally, (2016, JavaScript/HTML5: get current time of audio tag) illustrates using JavaScript code to query a currentTime/timestamp.)
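As a point of reference only, the currentTime property discussed above reports a media element's playback position in seconds. A minimal sketch (the function name is hypothetical, not from the record) of converting that value into a timestamp of the kind the claims describe might look like:

```javascript
// Hypothetical sketch: convert a playback position in seconds, as
// HTMLMediaElement.currentTime would report it, into an HH:MM:SS.mmm
// timestamp string.
function makeTimestamp(currentTimeSeconds) {
  const totalMs = Math.round(currentTimeSeconds * 1000);
  const hours = Math.floor(totalMs / 3600000);
  const minutes = Math.floor((totalMs % 3600000) / 60000);
  const seconds = Math.floor((totalMs % 60000) / 1000);
  const millis = totalMs % 1000;
  const pad = (n, w) => String(n).padStart(w, "0");
  return `${pad(hours, 2)}:${pad(minutes, 2)}:${pad(seconds, 2)}.${pad(millis, 3)}`;
}

// In a browser, the playback position would come from the media element,
// e.g. const t = document.querySelector("video").currentTime;
```

The sketch illustrates the Examiner's point that querying currentTime and recording it as a timestamp is routine use of the HTML5 media API.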
Mark (US 2015/0279424 A1) synchronizes File A with a portion of File B: the audio replacement service may determine a global timestamp associated with the first frame of File A and the last frame of File A. For instance, the global timestamp may be included as metadata with File A and determined from the internal clock of the device used to record File A. Mark supports that timestamping frames of audio files is an established feature of media files.
Claim 4 is not amended; therefore, Examiner maintains the rejection.
Claim 6: Applicant submits Packard adds "segment tags," but this metadata is associated with highlighted segments for highlight generation, not necessarily annotating the original multimedia file itself with specific timestamps and competency descriptions as claimed.
Examiner has considered Applicant's claim 6 argument. Applicant's claim recites, "… the multimedia file is annotated in the multimedia file…" Applicant's argument is explanatory but is not necessitated by the claimed limitation. The claim is not amended; therefore, Examiner maintains the rejection.
Claim 8: Applicant submits Packard mentions OCR for interpreting on-screen graphics within the event. This provides no teaching or suggestion related to processing a separate text file provided as evidence and selecting a rectangular portion thereof based on user input, as required by claims 8 and 9.
Examiner has considered Applicant's claim 8 argument. Applicant's claim recites "…selecting a rectangular portion of the text image file...". Examiner submits the technology of OCR highlights in a rectangular area. Based on Applicant's arguments, Examiner understands the art is multimedia indexing. See the claim 1 rejection and suggestions.
Examiner acknowledges Baker (US 2015/0312652 A1) indexes a segment list to a video. Baker [005] identifies one or more video sequences for inclusion in the video highlight reel, a video sequence included in the video highlight reel where a segment, correlated to the video sequence, satisfies one or more predefined rules; (c) displaying an interactive script including a plurality of script segments, a script segment of the plurality of script segments matched to a video sequence identified for inclusion in the video highlight reel in said step. Baker discloses indexing video segments and times associated with the segments. Baker [093]-[094] teaches an interactive script and video sequencing, where videos are indexed to segments from the segment list. Each script segment may include hypertext or otherwise be hyperlinked to the index table created and stored in step 224. See Baker [005], [093]-[094].
Claims 8 and 9 are not amended; therefore, Examiner maintains the rejection.
On page 11, the Applicant states the combination of Sturpe and Packard fails to teach or suggest the limitations of claims 4, 6, 8 and 9.
Examiner acknowledges Applicant's arguments for each dependent claim. However, Applicant did not amend claims 4, 6, and 8. Claim 9 appears to be a grammatical amendment. However, in light of the arguments, claim 9 is also rejected using Baker (US 2015/0312652 A1).
Withdrawal of all rejections under 35 U.S.C. § 103 is respectfully requested.
Examiner respectfully disagrees with Applicant's 35 U.S.C. 103 arguments. Applicant's amendments necessitate new grounds of rejection.
Response to Amendments
Claims 1, 4, 6, 8, 9, 10, and 11 are pending in this application. Claims 2, 3, 5, and 7 are canceled. Claims 12-17 are withdrawn. Claims 1 and 9 are amended.
Applicant's amendments are not sufficient to overcome the 35 U.S.C. 101 rejection set forth in the previous action. Claims 1, 4, 6, 8, 10, and 11 are rejected under 35 U.S.C. 101; see below.
Claim 9 is patent eligible – see MPEP 2106.05(e). Applicant is encouraged to request an interview with the Examiner.
Applicant's amendments are not sufficient to overcome the 35 U.S.C. 103 rejection set forth in the previous action. Claims 1, 4, 6, 8, 9, 10, and 11 are rejected under 35 U.S.C. 103; see below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4, 6, 8, 10, and 11 recite a process. (Claim 9 is patent eligible; see Response to Arguments, above.)
Claims 1, 4, 6, 8, 10, and 11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims (claim 1) recite, "… playing, … that contains a recording of an individual performing a task; providing, …, a list of predetermined competencies, each predetermined competency defined by a respective description that is stored … and that includes a respective competency criterion; receiving, … at a current play time of the multimedia file .. and when the list of predetermined competencies is provided, a user input indicating a selection of one or more predetermined competency from among the list of predetermined competencies, the one or more predetermined competency indicating an ability of the individual to perform the task; and in response to the user input: determining, in accordance with the current play time, a portion of … that demonstrates the ability of the individual in performing the task; generating a timestamp corresponding … ; tagging … with the timestamp and the respective description of the one or more predetermined competency; and storing …, the timestamp and … the respective competency criterion of the one or more predetermined competency; … , and the respective description of the one or more predetermined competency." Claims 1, 4, 6, 8, 10, and 11, in view of the claim limitations, recite determining that a competency task is completed and tagging the observed competency with a timestamp; these are concepts performed in the human mind (including an observation, evaluation, judgment, opinion). Accordingly, the claims recite certain mental processes, and thus the claims are directed to an abstract idea under the first prong of Step 2A.
This judicial exception is not integrated into a practical application under the second prong of Step 2A. In particular, the claims recite the following additional elements beyond the recited abstract idea: "A method comprising, by a computer device including a user interface and coupled to a database", "by the user interface, a multimedia file", and "at the database" in claim 1. However, when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element that amounts to adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).
The dependent claims recite the following additional elements that are not included in the independent claim:
Claim 4: “a JavaScript timestamp”, “a JavaScript currentTime”;
Claim 6: “text image file”, “by the computer device”;
Claim 10: “a first multimedia file”, “a second multimedia file”, “as the database”.
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims also fail to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more. (See MPEP 2106.05(f), Mere Instructions to Apply an Exception: "claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible." Alice Corp., 134 S. Ct. at 2358.)
At Step 2B, the claims also implicate MPEP 2106.05(d): receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).
Examiner concludes that the additional elements in combination fail to amount to significantly more than the abstract idea based on findings that each element merely performs the same function (s) in combination as each element performs separately. The claim is not patent eligible. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified exception (the abstract idea). Looking at the limitation as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
Dependent claims 4, 6, 8, 10, and 11 further narrow the abstract idea of independent claim 1. The claims 1, 4, 6, 8, 10, and 11 are not patent eligible.
Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 4, 6, 8, 10, and 11 do not transform the recited abstract idea into a patent eligible invention because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea.
Since there are no limitations in these claims that transform the exception into a patent eligible application such that these claims amount to significantly more than the exception itself, claims 1, 4, 6, 8, 10, and 11 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4, 6, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Sturpe (Scoring Objective Structured Clinical Examinations Using Video Monitors or Video Recordings) in view of Baker (US 2015/0312652 A1) and Packard (US 2016/0105708 A1).
Regarding Claim 1, (Currently Amended)
A method support assessing an ability of an individual in performing a task, the method comprising, by a computer device including a user interface and coupled to a database: playing, by the user interface, a multimedia file that contains a recording of an individual performing a task;
Sturpe teaches scoring methods for objective structured clinical examinations (OSCE) using real-time observations via video monitors and observations of videotapes. Sturpe [objective].
providing, by the user interface, when the multimedia file is played, a list of predetermined competencies, each predetermined competency defined by a respective description that is stored at the database and that includes a respective competency criterion;
Sturpe discloses a consultant rheumatologist observed a video recording of the encounter and independently scored each student's performance. Sturpe [p. 1 column 2 paragraph 2].
Sturpe discloses that students at the University of Maryland School of Pharmacy completed 3-station OSCEs as part of their coursework. Second-year students completed the OSCE as part of Patient-Centered Pharmacy Practice and Management II, a laboratory-based course in which students developed practice skills such as medication counseling and physical assessment. The third-year OSCE was conducted within Integrated Science and Therapeutics III/IV, a required general therapeutics course. Each OSCE was used to determine a percentage of each student's final grade. Points for each station were awarded in an all-or-none fashion, based on the pass/fail cut point determination for that station. Sturpe [p. 2 column 1 paragraph 3]-[p. 2 column 2 all].
receiving, by the user interface at a current play time of the multimedia file when the multimedia file is played and when the list of predetermined competencies is provided, in accordance with a user input indicating a selection of one or more predetermined competency from among the list of predetermined competencies at a current play time, the one or more predetermined competency indicating an ability of the individual to perform the task;
Sturpe discloses that on the day of the examination, faculty investigators observed their assigned encounters and rated student performance from real-time video and audio feed using television monitors and headsets located in a control room. The video feed of the encounter was simultaneously videotaped. Thus, the real-time observations were replicated for use during the video observations. Each investigator observed 10 consecutive student encounters over a 3½ hour OSCE session. Each investigator attended 2 sessions and completed 20 observations. Approximately 1 month later, each investigator watched the video recording of the assigned encounters and re-rated student performance. Sturpe [p. 2 column 2 all].
and in response to the user input: determining, in accordance with the current play time, a portion of the multimedia file that demonstrates the ability of the individual in performing the task;
Sturpe discloses the analytical checklist was scored differently in 15 of the 30 encounters observed. Checklist scores differed by no more than 2 points in all cases. For those encounters in which the pass/fail determination was different, students were more likely to receive a failing score when the video observation was used. Specifically, 4 students who passed upon real-time observation failed on video observation (13.3% of cohort). Sturpe [p. 3 column 2 all].
…
and the portion of the multimedia file, the multimedia file, and the respective description of the one or more predetermined competency.
Sturpe discloses analysis results of the P2 OSCE station analytical checklist are presented in Table 1. There was a high degree of agreement between the 2 observations, with an ICC(3,1) of 0.951. The analytical checklist was scored differently in 15 of the 30 encounters observed. Sturpe [p. 3 column 2 all], [Table 1].
Although highly suggested, Sturpe does not explicitly teach:
… by a computer device including a user interface and coupled to a database …. determining, in accordance with the current play time …. generating a timestamp corresponding to the portion of the multimedia file; tagging the multimedia file with the timestamp and the respective description of the one or more predetermined competency depending from the respective competency criterion stored in the database; and storing, at the database: in a table of the database, the timestamp linked against the respective competency criterion of the one or more predetermined competency; …
Baker teaches:
… by a computer device including a user interface and coupled to a database …. determining, in accordance with the current play time …. generating a timestamp corresponding to the portion of the multimedia file; tagging the multimedia file with the timestamp and the respective description of the one or more predetermined competency depending from the respective competency criterion stored in the database; and storing, at the database: in a table of the database, the timestamp linked against the respective competency criterion of the one or more predetermined competency; …
Baker [093]-[094] teaches an interactive script and video sequencing, where videos are indexed to segments from the segment list. Each script segment may include hypertext or otherwise be hyperlinked to the index table created and stored in step 224. See Baker [005], [093]-[094].
Baker [059] teaches a segment signature may be data describing a particular frame of video from the stored video of the event. Baker [060] teaches each segment from the segment list will have an associated segment signature which describes a single point in the video of the event. In further embodiments, the segment signature may be a time in the video. That is, the video of the event begins at time t0, a first sequence starts at video run time t1, a second sequence starts at video run time t2, etc. The segment signature for a particular sequence may thus be the video run time at which that sequence begins (or ends). Baker [059]-[060].
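The cited Baker passages can be pictured with a minimal sketch (all identifiers and values below are hypothetical, not taken from Baker): a segment signature recorded as the video run time at which a sequence begins, resolvable so a player can seek to it.

```javascript
// Illustrative sketch of segment signatures as video run times, per the
// cited Baker passages; identifiers and values are hypothetical.
const indexTable = [
  { segment: "seq-1", startTime: 12.0 }, // first sequence starts at run time t1
  { segment: "seq-2", startTime: 47.5 }, // second sequence starts at run time t2
];

// Resolve a segment's signature (its run time) for playback seeking;
// returns null when the segment is not in the index table.
function signatureFor(segmentId) {
  const entry = indexTable.find(e => e.segment === segmentId);
  return entry ? entry.startTime : null;
}
```

The hyperlinked script segments Baker describes would amount to entries that resolve to such run times in the stored index table.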
Sturpe discloses evaluating students in objective structured clinical examinations (OSCE). Baker discloses media indexing. It would have been obvious, before the effective filing date, to combine filming assessments of students, as taught by Sturpe, with associating/indexing corresponding sequences from a video of an event with the segment list, as taught by Baker, to come up with segments which are likely to be of greatest interest to a particular user. Baker [abstract].
Packard further teaches:
… in accordance with the current play time …. generating a timestamp corresponding to the portion of the multimedia file; tagging the multimedia file with the timestamp …, the timestamp …
Packard [0137], [355], [Figure 2B] disclose capturing timestamps.
Packard [0235] teaches individual segment profiles … segment start time, and that a default JSON file is created and saved containing ordered segments and summary information for the event. Packard [0229]-[0235], [Fig. 4J], [Fig. 4K].
(Although not relied on, Arntzen teaches multimedia frameworks and API queries such as "play from current position" implemented in plain JavaScript, Arntzen [section 3.1, section 6], and that the HTML5 media clock supports modification of currentTime, pause/play, and adjustments to playback rate, with the sequencing of text tracks remaining consistent. Arntzen also teaches timestamped messages. [section 8.1])
Sturpe discloses evaluating students in objective structured clinical examinations (OSCE). Packard teaches highlighting of events on media/websites. It would have been obvious, before the effective filing date, to combine filming assessments of students, as taught by Sturpe, with capturing the time of segments (timestamps), as taught by Packard, to output customized highlights. Packard [007].
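For background on the JSON segment-file technique cited from Packard [0229]-[0235], the following is a minimal sketch of how ordered segments with start times and summary information might be serialized as JSON. The function name and field names are hypothetical illustrations, not drawn from Packard.

```javascript
// Hypothetical sketch: build a JSON segment file containing ordered
// segments and summary information for an event, of the general kind
// described in Packard [0235].
function buildSegmentFile(eventName, segments) {
  // Sort segments by start time so the file lists them in order.
  const ordered = [...segments].sort((a, b) => a.startTime - b.startTime);
  return JSON.stringify({
    event: eventName,
    summary: { segmentCount: ordered.length },
    segments: ordered,
  });
}

const json = buildSegmentFile("OSCE-station-1", [
  { startTime: 42.0, tag: "competency-A" },
  { startTime: 7.5, tag: "competency-B" },
]);
console.log(json);
```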
Regarding Claim 2, (Canceled)
Regarding Claim 3, (Canceled)
Regarding Claim 4, (Previously presented)
The method of claim 1 wherein the timestamp …
Sturpe teaches that faculty investigators observed their assigned encounters and rated student performance from real-time video and audio feed using television monitors and headsets located in a control room. The video feed of the encounter was simultaneously videotaped. Thus, the real-time observations were replicated for use during the video observations. Sturpe [p. 3, column 1, paragraph 2].
(Real-time is a time.)
Although highly suggested, Sturpe does not explicitly teach:
a JavaScript timestamp and generating the timestamp involves a JavaScript currentTime method.
Packard teaches:
the timestamp is a JavaScript timestamp and generating the timestamp involves a JavaScript currentTime method.
Packard [0137], [355], [Figure 2B] disclose capturing timestamps.
Packard [0235] teaches individual segment profiles … segment start time, and that a default JSON file is created and saved containing ordered segments and summary information for the event. Packard [0229]-[0235], [Fig. 4J], [Fig. 4K].
Packard [0103] discloses that an example of such a client/server embodiment is a web-based implementation, wherein client device 206 runs a browser or app that provides a user interface for interacting with content (such as web pages, video content, and/or the like) from various servers 202, 214, 216, as well as data provider(s) 222 and/or content provider(s) 224, provided to client device 206 via communications network 204. Transmission of content and/or data in response to requests from client device 206 can take place using any known protocols and languages, such as Hypertext Markup Language (HTML), Java, Objective C, Python, JavaScript, and/or the like. Packard [0103].
Sturpe discloses evaluating students in objective structured clinical examinations (OSCE). Packard teaches highlighting of events on media/websites. It would have been obvious, before the effective filing date, to combine filming assessments of students, as taught by Sturpe, with capturing the time of segments (timestamps), as taught by Packard, to output customized highlights. Packard [007].
(Although not relied on, Arntzen teaches an implementation in plain JavaScript, and that the HTML5 media clock supports modification of currentTime, pause/play, and adjustments to playback rate, with the sequencing of text tracks remaining consistent. Arntzen also teaches timestamped messages. [section 8.1], [section 3.1, section 6]. In addition, (2016, JavaScript/HTML5: get current time of audio tag) illustrates using JavaScript code to query a currentTime/timestamp.)
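For background on the cited currentTime technique, the sketch below illustrates how a playback position in seconds (as returned by a media element's currentTime property in a browser) might be formatted into a timestamp string. The formatting function is a hypothetical illustration, not code from any cited reference.

```javascript
// Hypothetical sketch: format a playback position (seconds, as returned
// by mediaElement.currentTime in a browser) into an HH:MM:SS timestamp.
function toTimestamp(currentTime) {
  const total = Math.floor(currentTime);   // drop fractional seconds
  const h = Math.floor(total / 3600);
  const m = Math.floor((total % 3600) / 60);
  const s = total % 60;
  const pad = (n) => String(n).padStart(2, "0");
  return `${pad(h)}:${pad(m)}:${pad(s)}`;
}

// In a browser this might be toTimestamp(video.currentTime);
// here a literal value stands in for the playback position.
const stamp = toTimestamp(3725.4);
console.log(stamp); // → "01:02:05"
```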
Regarding Claim 5, (Canceled)
Regarding Claim 6, (Previously presented)
The method of claim 1 wherein the portion of the multimedia file is …
See above, video Sturpe [p. 3 column 1 paragraph 2]
Packard further teaches:
the multimedia file is annotated in the multimedia file.
Packard teaches timed tags and adding segment tags, Packard [0229]-[0235], [Figure 4J and the associated text].
Sturpe discloses evaluating students in objective structured clinical examinations (OSCE). Packard teaches highlighting of events on media/websites. It would have been obvious, before the effective filing date, to combine filming assessments of students, as taught by Sturpe, with capturing the time of segments (timestamps), as taught by Packard, to output customized highlights. Packard [007].
Regarding Claim 7, (Canceled)
Regarding Claim 8, (Previously presented)
The method of claim 1 wherein: the portion of the multimedia file is a first portion of the multimedia file, …
See above, video Sturpe [p. 3 column 1 paragraph 2].
Sturpe does not teach:
the multimedia file includes a text image file, and the method further comprises, by the computer device, determining a second portion of the multimedia file by selecting a rectangular portion of the text image file.
Packard teaches:
the multimedia file includes a text image file, and the method further comprises, by the computer device, determining a second portion of the multimedia file by selecting a rectangular portion of the text image file.
Packard [191] teaches loading 467 video data (such as optical character recognition (OCR) data) obtained by reading and interpreting on-screen graphics, and creating video data (OCR data).
Packard [0137], [355], [Figure 2B] disclose capturing timestamps. Packard [0235] teaches individual segment profiles … segment start time, and that a default JSON file is created and saved containing ordered segments and summary information for the event. Packard [0234]-[0235].
Sturpe discloses evaluating students in objective structured clinical examinations (OSCE). Packard teaches highlighting of events on media/websites. It would have been obvious, before the effective filing date, to combine filming assessments of students, as taught by Sturpe, with capturing the time of segments (timestamps) and OCR, as taught by Packard, to output customized highlights. Packard [007].
Regarding Claim 9, (Previously presented)
The method of claim 8 further comprising, by the computer device…
See above, video Sturpe [p. 3 column 1 paragraph 2]
Sturpe does not teach:
performing character recognition on the rectangular portion of the text image file, the second portion of the multimedia file including text obtained by the character recognition.
Baker teaches:
performing character recognition on the rectangular portion of the text image file, the second portion of the multimedia file including text obtained by the character recognition.
Baker [043] teaches that an indexing engine 110 may analyze frames of the stored video. A software routine, for example employing known optical character recognition techniques, may be used to analyze a video frame to identify a game clock, which will generally be in a known format. For example, the game clock in a football game will have one or two numeric digits, a colon, and then two more numeric digits. Baker [043].
Sturpe discloses evaluating students in objective structured clinical examinations (OSCE). Baker discloses media indexing. It would have been obvious, before the effective filing date, to combine filming assessments of students, as taught by Sturpe, with associating/indexing corresponding sequences from a video of an event with the segment list, as taught by Baker, and with capturing the time of segments (timestamps) and OCR, as taught by Packard, to arrive at segments which are likely to be of greatest interest to a particular user. Baker [abstract].
Packard further teaches:
performing character recognition on the rectangular portion of the text image file, the second portion of the multimedia file including text obtained by the character recognition.
Packard [191] teaches loading 467 video data (such as optical character recognition (OCR) data) obtained by reading and interpreting on-screen graphics, and creating video data (OCR data).
Packard [0137], [355], [Figure 2B] disclose capturing timestamps. Packard [0235] teaches individual segment profiles … segment start time, and that a default JSON file is created and saved containing ordered segments and summary information for the event. Packard [0234]-[0235].
Sturpe discloses evaluating students in objective structured clinical examinations (OSCE). Packard teaches highlighting of events on media/websites. It would have been obvious, before the effective filing date, to combine filming assessments of students, as taught by Sturpe, with capturing the time of segments (timestamps) and OCR, as taught by Packard, to output customized highlights. Packard [007].
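For background on the "rectangular portion" limitation, the sketch below illustrates the kind of region arithmetic that might precede character recognition: clamping a selected rectangle to the bounds of a text image so that the OCR step receives a valid crop region. The function and values are hypothetical and not drawn from the cited references; the OCR call itself is not shown.

```javascript
// Hypothetical sketch: clamp a selected rectangle to the bounds of a
// text image before handing the region to a character-recognition step.
function clampRect(rect, imageWidth, imageHeight) {
  // Keep the origin inside the image.
  const x = Math.max(0, Math.min(rect.x, imageWidth));
  const y = Math.max(0, Math.min(rect.y, imageHeight));
  // Shrink width/height so the rectangle does not extend past the edges.
  const w = Math.max(0, Math.min(rect.w, imageWidth - x));
  const h = Math.max(0, Math.min(rect.h, imageHeight - y));
  return { x, y, w, h };
}

// A selection that starts above the image and runs off its right edge
// is clamped to a valid crop region of a 320x240 text image.
const region = clampRect({ x: 50, y: -10, w: 400, h: 120 }, 320, 240);
console.log(region); // → { x: 50, y: 0, w: 270, h: 120 }
```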
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Sturpe (Scoring Objective Structured Clinical Examinations Using Video Monitors or Video Recordings) in view of Baker (US 2015/0312652 A1) and Packard (US 2016/0105708 A1), and in further view of Cudak (US 2015/0363156 A1).
Regarding Claim 10, (Previously Presented)
The method of claim 1 wherein: the multimedia file is a first multimedia file, and the method further comprises, by the computer device: linking a second multimedia file to the portion of the first multimedia file; and storing, at the database, the second multimedia file with the portion of the first multimedia file, the first multimedia file, and the description of the one or more predetermined competency.
See above, video Sturpe [p. 3 column 1 paragraph 2]
Sturpe does not explicitly teach:
linking a second multimedia file to the portion of the first multimedia file; and storing, at the database, the second multimedia file with the portion of the first multimedia file, the first multimedia file,
Cudak teaches:
linking a second multimedia file to the portion of the first multimedia file; and storing, at the database, the second multimedia file with the portion of the first multimedia file, the first multimedia file …
Cudak [013] discloses identifying a portion of the audio content that matches a portion of a separate audio file, removing the identified portion of the audio content from the multimedia file, and inserting a link into the multimedia file. The link points to the known audio file, specifies the portion of the separate audio file that matches the removed portion of the audio content, and identifies a point in the multimedia file where the portion of audio content was removed. Cudak [013], [abstract].
Sturpe discloses evaluating students in objective structured clinical examinations (OSCE). Cudak teaches analyzing a multimedia file including audio content and video content. It would have been obvious, before the effective filing date, to combine filming assessments of students, as taught by Sturpe, with identifying a portion of the audio content that matches a portion of a separate audio file, removing the identified portion of the audio content from the multimedia file, and inserting a link into the multimedia file, as taught by Cudak, so that during playback of the multimedia file, the specified portion of the known audio file is played at the identified point in the multimedia file. Cudak [006].
(Examiner submits Baker teaches elements of these limitations, also.)
Regarding Claim 11, (Previously Presented)
The method of claim 1 wherein: the first task is a first task, the ability is a first ability, the one or more predetermined competency is a first predetermined competency, and the method further comprises, by the computer device, determining a probability that the multimedia file includes evidence of a performance of a second task required to show a second predetermined competency in a second ability to perform the second task.
See above Sturpe [p. 3 column 1 paragraph 2] and Cudak [abstract], [006], [013]
Sturpe discloses evaluating students in objective structured clinical examinations (OSCE). Cudak teaches analyzing a multimedia file including audio content and video content. It would have been obvious, before the effective filing date, to combine filming assessments of students, as taught by Sturpe, with identifying a portion of the audio content that matches a portion of a separate audio file, removing the identified portion of the audio content from the multimedia file, and inserting a link into the multimedia file, as taught by Cudak, so that during playback of the multimedia file, the specified portion of the known audio file is played at the identified point in the multimedia file. Cudak [006].
(Examiner submits Baker teaches elements of these limitations, also.)
Regarding Claims 12-17, (Withdrawn)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Deichmann (EP 3009959 A2) discloses metadata OCR to find video frames, and one or more communication flows for selecting a transformed document from the communication session.
(2016, JavaScript/HTML5: get current time of audio tag) illustrates using JavaScript code to query a currentTime/timestamp.
Hauptmann (US 2018/0293313 A1) teaches semantic features associated with a timestamp indicative of a time at which the semantic feature is presented during the audio-visual recording.
Arntzen (2016, Data-Independent sequencing with the timing object) discloses JavaScript and currentTime.
Mark (US 2015/0279424 A1) teaches that timestamping frames of audio files is an established feature of media files.
Fahmie (WO 2013/138764 A1) teaches media tagging, identifying characteristics of a segment of media and applying a set of quantified data to the segment, including timestamps, unique participants included in the segment, and content.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THEA LABOGIN whose telephone number is (571)272-9149. The examiner can normally be reached Monday -Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson, can be reached on 571-270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THEA LABOGIN/Examiner, Art Unit 3624 /PATRICIA H MUNSON/Supervisory Patent Examiner, Art Unit 3624