Prosecution Insights
Last updated: April 19, 2026
Application No. 17/558,831

LIVE LECTURE AUGMENTATION WITH AN AUGMENTED REALITY OVERLAY

Final Rejection — §103, §112
Filed: Dec 22, 2021
Examiner: BODENDORF, ANDREW
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Chegg Inc.
OA Round: 2 (Final)
Grant Probability: 27% (At Risk)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 66%

Examiner Intelligence

Grants only 27% of cases.
Career Allow Rate: 27% (25 granted / 94 resolved; -43.4% vs TC avg)
Interview Lift: +39.6% (strong, roughly +40%; based on resolved cases with interview)
Typical Timeline: 4y 1m avg prosecution; 32 currently pending
Career History: 126 total applications across all art units

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 24.5% (-15.5% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 94 resolved cases
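The four deltas appear to point back to a single Tech Center baseline. A minimal sketch of that back-derivation, assuming (as a reading of this dashboard, not a documented formula) that each "vs TC avg" figure is a simple percentage-point difference:

```python
# Hypothetical back-derivation of the Tech Center average implied by each row,
# assuming "vs TC avg" is a percentage-point delta (an assumption about this
# dashboard, not a documented formula).
rows = {
    "§101": (20.1, -19.9),
    "§103": (35.8, -4.2),
    "§102": (16.6, -23.4),
    "§112": (24.5, -15.5),
}

for statute, (examiner_rate, delta) in rows.items():
    implied_tc_avg = examiner_rate - delta  # e.g., 20.1 - (-19.9) = 40.0
    print(f"{statute}: examiner {examiner_rate:.1f}%, implied TC average ≈ {implied_tc_avg:.1f}%")
```

Under that assumption, every row implies a Tech Center average of about 40%, consistent with the deltas being drawn from one baseline estimate.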

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims
This action is in response to the amendment filed October 6, 2025. Claims 1-3, 7-11, and 13-28 are pending, where claims 1 and 2 have been amended, claims 4-6 and 12 have been canceled, claims 25-28 have been newly added, and claims 13-20 are withdrawn.

Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a), except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994). The disclosure of the prior-filed application, Application No. 63/130,580, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) for one or more claims of this application. The provisional application lacks description and/or support relating to gestures. Moreover, the provisional application lacks any description and/or support for operations of independent claim 1 directed to recognizing the captured hand gesture as corresponding to at least one of a plurality of defined user customizable hands gesture commands, and automatically executing the corresponding defined user customizable hands gesture command on the AR connected device to interact with at least one of the video lecture content and the supplemental content to confirming the identity of the subject matter expert by a live demonstration using a camera of a mobile device or computer of the subject matter expert. Additional elements lacking any description and/or support include any of the further gesture limitations as recited in claims 2, 3, 7-9, 11, 14, 15, 18, 20, and 21. Accordingly, claims 1-3, 7-11, and 13-28 are not entitled to the benefit of the prior-filed application. For purposes of examination, the effective filing date is December 22, 2021.

Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. In this instance, claims 1-3, 7-11, and 21-28 recite a generic placeholder “device” with recited functions, such as “displaying video lecture content and supplemental content” and “capturing a digital image,” without any corresponding structure. Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The structure corresponding to the AR device is a desktop, a laptop, a tablet, a mobile device, a smartphone, a smart television, a wearable device, a virtual reality device, head-mounted displays, gaming systems, or AR glasses (see, e.g., ¶¶22, 24). The structure corresponding to the device for capturing a digital image is a digital camera (see, e.g., ¶91). If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a): IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 25 and 26 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention. Specifically, the limitations: 1) “recognizing the captured hand gesture is limited to a particular hand gesture command field” found in claim 25; and 2) “wherein automatically initiating recording a snippet of the video lecture content displayed by the AR device and launching an application for creating a user-generated note on the AR device are in response to a historical snippet recording frequency and a historical user-generated note creation frequency exceeding respective frequency thresholds” found in claim 26 recite NEW MATTER.
With regard to limitation 1), the specification at ¶¶91-94 describes a hand gesture command field 915. For example, ¶92 describes “Accordingly, as a user moves their left hand, right hand, or both hands within the hands gesture command field 915, the image capturing device can detect this movement of the user's hand(s) to recognize if the fingers (and palms) are positioned in a particular gesture that is intended to convey information, namely a hands gesture command. For example, the user may place their left hand within the hands gesture command field 915, and motion their hand by extending the index finger outward, extending the thumb upward (contracting the other fingers inward to touch to inside of the palm), and facing the palm inward (e.g., towards the AR connected device display 901). The user's hand gesture can be captured, for example, by a front-facing digital camera of the smartphone. By capturing and analyzing the imaging of the user's hand motion within the hand gestures command field 915, the hand gesture made by the user can be recognized by the system as representing a corresponding hands gesture command that has been previously defined by the user and maintained by the system (shown as library of user customizable hands gesture commands 920).” However, nothing described in the specification limits recognizing captured hand gestures to a particular hand gesture command field 915. Instead, the specification describes that the system is capable of recognizing gestures within this field; however, no exclusion of other areas/fields is described. Therefore, the specification does not provide a written description supporting the limitations of claim 25.
With regard to limitation 2), in particular “threshold” and “frequency,” a review of the specification discloses the only two instances of these terms at ¶¶40 and 60.
¶40 describes “the conviction conv(x→y) is given by a ratio of the expected frequency of concept x occurring in a document without concept y (assuming x and y are independent concepts) to the observed frequency of x without y, or P(x)P(not y)/P(x and not y).” ¶60 describes “At 520, learning profile module 170 uses the user's preferences from 510 to determine other user records with similar preferences. In some implementations, learning profile module 170 compares the user's preferences with other users' preferences, and interprets a match over a certain threshold between these preferences as meaning that users have similar preferences.” However, the specification is silent with regard to any “historical snippet frequency” or “frequency thresholds” or comparison to frequency thresholds, or taking any action with regard to snippet frequency or frequency thresholds. Therefore, the specification does not provide a written description supporting the limitations of claim 26. As a result, the amended claims 25 and 26 contain subject matter which lacks adequate written description, and for at least these reasons, claims 25 and 26 are found to fail the written description requirement.
The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 3, 7-11, and 20-24 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
In re claims 3, 7-9, and 21, the claims recite “hands gesture.” This language is confusing. “Hands gesture” would imply that the gesture involves multiple hands of a user (plural); however, claim 1 previously refers to a hand gesture (singular). Therefore, it is unclear whether the hands gesture is plural or singular. In addition, it is unclear whether the hands gestures of claims 3, 7-9, and 21 are the same as or different from the hand gesture of claim 1.
In re claims 3, 7-9, 11, 21, and 22, the claims recite “the AR connected device.” There is insufficient antecedent basis for this limitation in the claims.
In re claim 3, the claim recites “the plurality of defined user customizable hands gesture commands.” There is insufficient antecedent basis for this limitation in the claim.
In re claims 7-9, 11, 20, and 21, the claims recite “the corresponding defined user customizable hands gesture command.” There is insufficient antecedent basis for this limitation in the claims.
Claims 9, 10, 11, and 21-24 depend from a rejected base claim, and therefore are rejected for at least the reasons provided for the base claim.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 7, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 11,262,885 by Burckel et al. (“Burckel”) in view of US Publication No. 2022/0374585 by Wang et al. (“Wang”) and further in view of US Publication No. 2018/0081447 by Gummadi et al. (“Gummadi”).
In re claim 1, Burckel discloses a system [Fig. 1] comprising: an augmented reality (AR) connected device displaying video lecture content and supplemental content displayed as an AR overlay in relation to the video lecture content [Fig. 1, Fig. 2B/2C, col. 7, ll. 23-53, col. 8, ll. 58-68, col. 9, ll. 22-45, col. 10, l. 42-col. 11, l. 24, among others, describe a video of a lecture by a teacher transmitted to and displayed by an AR device of students with the ability to add notes and annotations (supplemental content) which are presented on the video lecture content (overlay)]; an image capturing device coupled to the AR connected device, the image capturing device capturing a digital image including a hand gesture [Fig. 5, 6, Fig. 10, Fig. 23A, col. 7, ll. 23-53, col. 15, l. 49-col., claim 5 describe an image capture device (e.g., a camera), including tracking hands to determine hand gestures in a virtual 3D space]; and one or more computer processors coupled to the AR connected device and the image capturing device, wherein the one or more computer processors execute instructions that cause the one or more processors to [Fig. 60 shows CPU 6002 and memories #6004, 6006, 6008]: recognize the captured hand gesture as corresponding to at least one of a plurality of defined gesture commands, wherein each defined hand gesture is mapped to a particular analysis or recording action [Fig. 5, 6, Fig. 10, Fig. 23A, Fig. 41, col. 7, ll. 23-53, col. 15, l. 27-col. 19, l. 9, col. 38, l. 47-col. 39, l. 3, claim 5, among others, describe tracking hands to determine a plurality of defined hand gestures (e.g., various tap gestures in a virtual 3D space); each gesture corresponds to a command to interact with the virtual environment. Defined commands include recording, annotation, and notes, including custom commands (e.g., Custom Tap Symbol and macros using Tap-Blanks)], and responsive to recognizing the captured hand gesture, automatically initiate an application for creating a user-generated note on the AR connected device [Fig. 2C, Fig. 5, 6, Fig. 10, Fig. 23A, Fig. 41, col. 7, ll. 23-53, col. 9, ll. 21-68, col. 10, l. 43-col. 11, l. 4, col. 15, l. 27-col. 19, l. 9, col. 38, l. 47-col. 39, l. 3, claim 5, among others, describe tracking hands to determine hand gestures (e.g., various tap gestures) in a virtual 3D space that correspond to computer instructions (commands) to interact with the virtual environment, including custom commands (e.g., Custom Tap Symbol and macros using Tap-Blanks), including lecture content and supplemental content (e.g., annotations and notes, among others)].
Burckel discloses capturing video content of a lecture using tap boards in a working deck using an AR device. However, to the extent Burckel lacks recording a snippet of the video lecture content displayed by the AR connected device, Wang teaches an AR device 1990 and virtual learning scenarios, which automatically generate representative videos, annotated during recording of one or more original videos to indicate key ideas and concepts. In particular, a user (e.g., a presentation participant, such as a student) may generate one or more (e.g., a set of) curated, searchable video content items (e.g., summary, snippets) deemed important by the user with annotations and/or overlays on the video, see, e.g., ¶¶2-4, 54-57, 101, and 216-221. Burckel and Wang are both considered to be analogous to the claimed invention because they are in the same field of virtual learning environments that provide interaction, organization, and annotation of data using AR devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel to include recording of video snippet content, as taught by Wang, in order to improve user experience, for example, personalizing the user's experience by allowing a user to curate, search, and find specific video content the user deems important without having to watch an entire presentation and/or to form study guides from material they deem important, see, e.g., ¶¶1, 57, 119, 149.
Burckel, in view of Wang, teaches responsive to recognizing the captured hand gesture automatically initiating recording a snippet of the video lecture content displayed by the AR device; and responsive to recognizing the captured hand gesture automatically launching an application for creating a user-generated note on the AR device. Burckel in view of Wang lacks a teaching of a single gesture to perform both operations of recording a snippet and taking a note. However, Gummadi teaches mapping a single user customizable hand gesture to execute multiple operations [Fig. 2, ¶¶5-9, 27-29, 33]. Burckel, Wang, and Gummadi are considered to be analogous to the claimed invention because they are in the same field of virtual learning environments that use hand gestures to invoke operations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel in view of Wang to map multiple operations of recording a snippet and taking a note to a single gesture, as taught by Gummadi, in order to improve user experience, for example, giving the user more control over their device for hands-free operation that can be customized specifically to the user and/or to reduce the number of gestures needed to operate a device, see, e.g., ¶¶2, 3, 7.
In re claim 2, Burckel discloses the image capturing device capturing the digital image including the hand gesture within a defined area of a field in front of the AR device [Fig. 6, col. 17, ll. 51-55; ¶¶202, 219, 242, among others, describe gestures in an AR/VR environment defined as one or more outstretched fingers 622 tracked in the user's field of view 624].
In re claim 7, Burckel discloses capturing video content of a lecture using tap boards in a working deck using an AR device. However, to the extent Burckel lacks recording a snippet of the video lecture content displayed by the AR connected device, Wang teaches an AR device 1990 and virtual learning scenarios which automatically generate representative videos, annotated during recording of one or more original videos to indicate key ideas and concepts. In particular, a user (e.g., a presentation participant, such as a student) may generate one or more (e.g., a set of) curated, searchable video content items (e.g., summary, snippets) deemed important by the user with annotations and/or overlays on the video, see, e.g., ¶¶2-4, 54-57, 101, 216-221, among others. Burckel and Wang are both considered to be analogous to the claimed invention because they are in the same field of virtual learning environments that provide interaction, organization, and annotation of data using AR devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel to include recording of video snippet content, as taught by Wang, in order to improve user experience, for example, personalizing the user's experience by allowing a user to curate, search, and find specific video content the user deems important without having to watch an entire presentation and/or to form study guides from material they deem important, see, e.g., ¶¶1, 57, 119, 149.
In re claim 25, Burckel discloses wherein recognizing the captured hand gesture is limited to a particular hand gesture command field [Fig. 6, col. 17, ll. 51-55, among others, describe gestures in an AR/VR environment defined as one or more outstretched fingers 622 tracked in the user's field of view 624. The gesture is limited to the field 624 as shown in Fig. 6].

Claims 3, 8-11, and 21-24 are rejected under 35 U.S.C. 103 as being unpatentable over Burckel in view of Wang and Gummadi and further in view of US Publication No. 2021/0247846 by Shriram et al. (“Shriram”).
In re claim 3, Burckel discloses macros and tap blanks, which are implemented using hand gestures, that may be customized to provide user-defined interactions with the tapisphere. However, to the extent Burckel lacks defined user customizable hands gesture commands on the AR connected device, Shriram teaches an AR system 110 with gesture tracking Application 130 and module 220, including a user profile stored within the database 140 that may store a user-specified name of a custom hand gesture, images of the customized hand gesture taken by the user using the mobile client, and a user-specified mapping of a gesture to the customized hand gesture, ¶¶26, 31.
In addition, Burckel lacks, but Shriram teaches, a gesture database storing the plurality of defined user customizable hands gesture commands, wherein each of the plurality of defined user customizable hands gesture commands defines a correlation between a defined hands gesture and one or more actions to be performed on the AR connected device in response to executing the respective defined user customizable hands gesture command [¶¶25, 26, 31, among others, teach a database 140 for hand gesture commands that are customizable, including a user-specified mapping of a gesture to the customized hand gesture]. Burckel and Shriram are both considered to be analogous to the claimed invention because they are in the same field of AR devices with hand gestures. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel to include a gesture database storing the plurality of defined user customizable hands gesture commands, as taught by Shriram, in order to improve user experience, for example, by personalizing the user's experience using the AR client and/or to reduce processing cycles and power consumption, see, e.g., ¶¶26, 39.
In re claim 8, Burckel discloses wherein automatically executing the corresponding defined user customizable hands gesture command on the AR connected device to interact with at least one of the video lecture content and the supplemental content includes capturing a user-generated note [Fig. 2C, col. 8, l. 58-col. 9, l. 2, col. 9, ll. 38-68, col. 10, l. 43-col. 11, l. 3, among others, describe user gestures to annotate lecture and supplemental content].
In re claim 9, Burckel discloses automatically launching an application for creating the user-generated note on the AR connected device based on a type of user-generated note to be captured [Figs. 2C, 45, col. 9, ll. 38-68, col. 10, ll. 43-68, among others, describe user-generated annotations that launch different applications based on the type of note, for example, Glyphs, Vignettes, Voice commands, hierarchical annotations, private annotations, tracer, marker, snap shapes, tap shapes, to name a few].
In re claim 10, Burckel discloses the application comprises at least one of: a voice recognition application, a stylus writing application, and a word processing application [Fig. 11, col. 7, ll. 38-41, col. 12, ll. 4-11, col. 17, ll. 14-30, col. 24, ll. 43-67, among others, describe a voice user interface (VUI)].
In re claim 11, Burckel discloses automatically executing the corresponding user customizable hands gesture command to interact with at least one of the video lecture content and the supplemental content. However, Burckel does not explicitly disclose recording a snippet of the video lecture content displayed by the AR connected device to generate a snippet that is associated with a time period when the user-generated note is captured. Wang, however, teaches recording a snippet of the video lecture content displayed by the AR connected device to generate a snippet that is associated with a time period when the user-generated note is captured [¶¶57, 58, 101, 109, among others, describe using metadata to synchronize annotations with video snippets to form searchable video content]. Burckel and Wang are both considered to be analogous to the claimed invention because they are in the same field of virtual learning environments that provide interaction, organization, and annotation of data using AR devices.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel to generate a snippet that is associated with a time period when the user-generated note is captured, as taught by Wang, in order to improve user experience, for example, personalizing the user's experience by allowing a user to curate, search, and find specific video content the user deems important without having to watch an entire presentation and/or to form study guides from material they deem important, see, e.g., ¶¶1, 57, 119, 149.
In re claim 21, Burckel discloses wherein automatically executing the corresponding defined user customizable hands gesture command on the AR connected device to interact with at least one of the video lecture content and the supplemental content includes capturing a user-generated question [Fig. 32, col. 35, ll. 45-63, among others, describe capturing questions during a lecture scenario].
In re claim 22, Burckel discloses capturing the user-generated question includes automatically launching an application for creating the user-generated question on the AR connected device based on the type of user-generated question to be captured [Fig. 32, col. 35, ll. 45-63, among others, describe capturing questions during a lecture scenario based on type, for example, hold triggers and drip triggers using the ask tap symbol].
In re claim 23, Burckel discloses the launched application comprises at least one of: a voice recognition application, a stylus writing application, and a word processing application [Fig. 11, col. 7, ll. 38-41, col. 12, ll. 4-11, col. 17, ll. 14-30, col. 24, ll. 43-67, among others, describe a voice user interface (VUI)].
In re claim 24, Burckel discloses capturing the user-generated question. Burckel lacks, but Wang teaches, recording a snippet of the video lecture content [¶¶2-4, 54-58, 61, 101, among others, describe recording a snippet of video lecture content (e.g., presentations)]. Burckel and Wang are both considered to be analogous to the claimed invention because they are in the same field of virtual learning environments that provide interaction, organization, and annotation of data using AR devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel to include recording of video snippet content, as taught by Wang, in order to improve user experience, for example, personalizing the user's experience by allowing a user to curate, search, and find specific video content the user deems important without having to watch an entire presentation and/or to form study guides from material they deem important, see, e.g., ¶¶1, 57, 119, 149.

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Burckel in view of Wang and Gummadi and further in view of US Publication No. 2021/0399911 by Jorasch et al. (“Jorasch”).
In re claim 27, Burckel discloses one or more computer processors executing instructions that cause the one or more processors to process first and second captured hand gestures, including recognizing the gestures and executing corresponding commands, such as record. Burckel discloses capturing video content of a lecture using tap boards in a working deck using an AR device.
However, to the extent Burckel lacks recording a snippet of the video lecture content displayed by the AR connected device, Wang teaches an AR device 1990 and virtual learning scenarios which automatically generate representative videos, annotated during recording of one or more original videos to indicate key ideas and concepts. In particular, a user (e.g., a presentation participant, such as a student) may generate one or more (e.g., a set of) curated, searchable video content items (e.g., summary, snippets) deemed important by the user with annotations and/or overlays on the video, see, e.g., ¶¶2-4, 54-57, 101, 139, 149, 151, 216-221, among others. Burckel and Wang are both considered to be analogous to the claimed invention because they are in the same field of virtual learning environments that provide interaction, organization, and annotation of data using AR devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel to include recording of video snippet content, as taught by Wang, in order to improve user experience, for example, personalizing the user's experience by allowing a user to curate, search, and find specific video content the user deems important without having to watch an entire presentation and/or to form study guides from material they deem important, see, e.g., ¶¶1, 57, 119, 149.
Burckel in view of Wang teaches multiple gestures and recording a snippet of the video lecture content in response to a gesture, but doesn’t explicitly teach terminating the recording of the snippet of the video lecture content. However, Jorasch teaches a gesture to stop recording [¶2511, among others, gesture to stop capturing an image]. Burckel, Wang, and Jorasch all describe video capture and editing including use of gestures to control content procurement. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel in view of Wang to include a gesture to stop recording of video snippet content, as taught by Jorasch, in order to improve user experience, for example, improving user control by allowing the user an improved response speed that is faster than navigating through menus or searching for a way to stop, see, e.g., ¶2511.

Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Burckel in view of Wang and Gummadi and further in view of Jorasch and US Publication No. 2016/0366330 by Boliek et al. (“Boliek”).
In re claim 28, Burckel discloses one or more computer processors executing instructions that cause the one or more processors to process first and second captured hand gestures, including recognizing the gestures and executing corresponding commands, such as record. Burckel discloses capturing video content of a lecture using tap boards in a working deck using an AR device. However, to the extent Burckel lacks recording a snippet of the video lecture content displayed by the AR connected device, Wang teaches an AR device 1990 and virtual learning scenarios which automatically generate representative videos, annotated during recording of one or more original videos to indicate key ideas and concepts.
In particular, a user (e.g., a presentation participant, such as a student) may generate one or more (e.g., a set of) curated, searchable video content items (e.g., summary, snippets) deemed important by the user with annotations and/or overlays on the video, see, e.g., ¶¶2-4, 54-57, 101, 139, 149, 151, 216-221, among others. Burckel and Wang are both considered to be analogous to the claimed invention because they are in the same field of virtual learning environments that provide interaction, organization, and annotation of data using AR devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel to include recording of video snippet content, as taught by Wang, in order to improve user experience, for example, personalizing the user's experience by allowing a user to curate, search, and find specific video content the user deems important without having to watch an entire presentation and/or to form study guides from material they deem important, see, e.g., ¶¶1, 57, 119, 149.
Burckel in view of Wang teaches recording of snippets but doesn’t explicitly teach automatically recording for a particular duration. However, Boliek teaches creating video highlights from recording of video data (i.e., snippets), including using a particular duration [¶¶90, 91, 137, 292-295, 316]. Burckel, Wang, and Boliek all describe video capture and editing including use of gestures to control content procurement. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel in view of Wang to include a duration for recording of video snippet content, as taught by Boliek, in order to improve user experience, for example, by adapting content capture (e.g., duration) based on the context of the event being captured, see, e.g., ¶¶102, 116, 120.
Burckel in view of Wang teaches multiple gestures and recording a snippet of the video lecture content in response to a gesture, but doesn’t explicitly teach terminating the recording of the snippet of the video lecture content. However, Jorasch teaches a gesture to stop recording [¶2511, among others, gesture to stop capturing an image]. Burckel, Wang, and Jorasch all describe video capture and editing including use of gestures to control content procurement. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Burckel in view of Wang to include a gesture to stop recording of video snippet content, as taught by Jorasch, in order to improve user experience, for example, improving user control by allowing the user an improved response speed that is faster than navigating through menus or searching for a way to stop, see, e.g., ¶2511.

Response to Arguments
Applicant's arguments filed October 6, 2025 have been fully considered. The objection to the drawings is withdrawn in view of Applicant’s amendments and remarks. The objection to the specification is withdrawn in view of Applicant’s amendments. The objection to the claims is withdrawn in view of Applicant’s amendments. The rejection under 112(b) is maintained for the reasons given above. Applicant did not provide any arguments other than that clarifying amendments were made. The rejection has been updated to reflect Applicant’s amendments to the claims.
Applicant’s arguments with respect to rejections under Section 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed on the attached Notice of References Cited. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew Bodendorf whose telephone number is (571) 272-6152. The examiner can normally be reached M-F 9AM-5PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai, can be reached on (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW BODENDORF/
Examiner, Art Unit 3715
/XUAN M THAI/
Supervisory Patent Examiner, Art Unit 3715

Prosecution Timeline

Dec 22, 2021
Application Filed
May 27, 2025
Non-Final Rejection — §103, §112
Sep 02, 2025
Applicant Interview (Telephonic)
Sep 03, 2025
Examiner Interview Summary
Oct 06, 2025
Response Filed
Jan 22, 2026
Final Rejection — §103, §112
Apr 08, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592164
VIRTUAL REALITY TRAINING SIMULATOR
2y 5m to grant Granted Mar 31, 2026
Patent 12551757
MACHINE-LEARNED EXERCISE CAPABILITY PREDICTION MODEL
2y 5m to grant Granted Feb 17, 2026
Patent 12548467
ELECTRO MAGNETIC REFRESHABLE BRAILLE READER
2y 5m to grant Granted Feb 10, 2026
Patent 12536921
Segmented Alphanumeric Display Using Electromagnetic Microactuators
2y 5m to grant Granted Jan 27, 2026
Patent 12508472
TRACKING THREE-DIMENSIONAL MOTION DURING AN ACTIVITY
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 27%
With Interview: 66% (+39.6%)
Median Time to Grant: 4y 1m
PTA Risk: Moderate
Based on 94 resolved cases by this examiner. Grant probability derived from career allow rate.
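A minimal sanity check of how the headline projections appear to fit together, using the dashboard's stated derivation (allow rate = granted / resolved) and treating the interview lift as an additive percentage-point adjustment; the additive model is an assumption, not a documented formula:

```python
# Career allow rate and with-interview projection, per the figures shown above.
granted, resolved = 25, 94           # examiner's resolved-case record
interview_lift_pts = 39.6            # reported interview lift, in percentage points

career_allow_rate = 100 * granted / resolved             # ≈ 26.6%, displayed as 27%
with_interview = career_allow_rate + interview_lift_pts  # ≈ 66.2%, displayed as 66%

print(f"Career allow rate: {career_allow_rate:.1f}%")
print(f"With interview:    {with_interview:.1f}%")
```

Under that assumption, the rounded figures (27% and 66%) match the values reported in the Examiner Intelligence and Prosecution Projections panels.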
