DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed February 3, 2026 have been fully considered but they are not persuasive.
Applicant argues (claims 1, 11, and 21) that Drummond fails to disclose selecting at least one detector based on matching the user profile data and the AR device data to the plurality of requests.
In response, Drummond (Para 49) discloses the profile data 302 stores multiple types of profile data about a particular entity… where the entity is an individual. Drummond (Para 51) further discloses database 124 further includes a travel parameters table 318 for storing respective travel parameters for users… it is possible for the travel parameters table 318 to be included as part of the profile data 302. Thus, each entity/user may have respective travel parameters associated therewith… Examples of travel parameters include… general locations, specific venues or landmarks… and/or topics of interest. Drummond (Para 97) discloses an object detection system sends to a messaging client object attributes matching travel parameters, where the matching meets a threshold comparison value. Therefore, Drummond discloses selecting at least one detector based on matching the user profile data and the AR device data to the plurality of requests.
Applicant argues (claims 1, 11, and 21) that Drummond fails to disclose wherein each respective detector of the at least one detector is constrained to detect a subset of objects comprising a respective particular type of object and refrain from detecting objects that do not match the respective particular type of object.
In response, Drummond (Para 97) discloses an object detection system sends to a messaging client object attributes matching travel parameters, where the matching meets a threshold comparison value. Drummond (Para 95) discloses, for example, the object detection system 212 may detect object(s) in the captured image that relate to travel (e.g., a landmark, a piece of art, a product for sale at a particular venue, etc.). Additionally, Drummond (Para 123, 126; Fig. 7A) discloses identifying a landmark in an image and determining content to present in association with the landmark. Therefore, Drummond discloses wherein each respective detector of the at least one detector is constrained to detect a subset of objects comprising a respective particular type of object and refrain from detecting objects that do not match the respective particular type of object.
To the extent that the response to the applicant's arguments may have mentioned new portions of the prior art references which were not used in the prior office action, this does not constitute a new ground of rejection. It is clear that the prior art reference is of record and has been considered entirely by applicant. See In re Boyer, 363 F.2d 455, 458 n.2, 150 USPQ 441, 444 n.2 (CCPA 1966) and In re Bush, 296 F.2d 491, 496, 131 USPQ 263, 267 (CCPA 1961).
The mere fact that additional portions of the same reference may have been mentioned or relied upon does not constitute a new ground of rejection. In re Meinhardt, 392 F.2d 273, 280, 157 USPQ 270, 275 (CCPA 1968).
Information Disclosure Statement
The information disclosure statement filed 02/07/2024 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information (e.g., Presentation for CGDI Workshop) referred to therein has not been considered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Use of the word “means” (or “step for”) in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function.
Absence of the word “means” (or “step for”) in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material or acts to perform that function.
Claim elements in this application that use the word “means” (or “step for”) are presumed to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Similarly, claim elements that do not use the word “means” (or “step for”) are presumed not to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim limitation “means” has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder “means” coupled with functional language, for example, “receiving… requests,” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier.
Since the claim limitation invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, claim 21 has been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation: an exchange server (Specification, Para 5).
If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that it/they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim(s) recite sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-19 and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Virginia Drummond et al., US 2023/0274542 A1 (“Drummond”).
Independent claim 1, Drummond discloses a method comprising:
receiving, from an Augmented Reality (AR) device, user profile data and AR device data (i.e. messaging server system provides client device information and geolocation and social network information, e.g. user profile data – Para 25);
receiving, from a plurality of AR insertion requesters, a plurality of requests for running a plurality of detectors (i.e. web server processes third party application requests – Para 26);
selecting at least one detector based on matching the user profile data and the AR device data to the plurality of requests (i.e. The profile data 302 stores multiple types of profile data about a particular entity… Where the entity is an individual – Para 49; The database 124 further includes a travel parameters table 318 for storing respective travel parameters for users… it is possible for the travel parameters table 318 to be included as part of the profile data 302. Thus, each entity/user may have respective travel parameters associated therewith… Examples of travel parameters include… general locations, specific venues or landmarks…and/or topics of interest – Para 51; for example, the object detection system 212 may detect object(s) in the captured image that relate to travel (e.g., a landmark, a piece of art, a product for sale at a particular venue, etc.) – Para 95; receiving augmentation content from multiple application servers – Fig. 7B – based on messaging server system providing client device information and geolocation and social network information, e.g. user profile data – Para 25), wherein each respective detector of the at least one detector is constrained to detect a subset of objects comprising a respective particular type of object and refrain from detecting objects that do not match the respective particular type of object (i.e. the messaging client 104 determines that the attribute(s) of the object (e.g., object name, venue) and environmental factors (e.g., device geolocation and time) correspond with one or more of the travel parameters. For example, such correspondence may be based on matching (e.g., meeting a threshold comparison value with respect to) time periods, locations, topics and/or activities of the object attribute(s)/environmental factors with those of the travel parameters – Para 97);
transmitting instructions to the AR device, wherein the instructions cause the AR device to execute the at least one detector to detect a matching object in a physical environment visible via the AR device (i.e. messaging system 100 embodies a number of subsystems, which are supported on the client-side by the messaging client 104 and on the server-side by the application servers 114; subsystems include an object detection system – Para 31; Fig. 2);
receiving, from the AR device, an indication of detection of the matching object and matching object metadata (i.e. object detection system sends the messaging client objects attributes that are matched to environmental factors – Para 97; messaging server system transmits data to application servers for retrieval of content – Para 26, 27);
accessing a plurality of requests for AR insertion of AR supplemental content that were received from the plurality of AR insertion requesters (i.e. third party application servers access data associated with messages via a web server processing requests – Para 26);
selecting a selected AR supplemental content based on the plurality of requests for AR insertion of AR supplemental content and the matching object metadata (i.e. based on a number or frequency of confirmed matches between matching search parameters select the supplemental content for display – Para 100; and use application servers for retrieval of content – Para 26, 27 – associated with detected objects – Para 107); and
transmitting an instruction, for the AR device, to display the selected AR supplemental content to appear overlaid over the physical environment proximate to the matching object (i.e. the augmented reality content item may be configured to modify the captured image with augmented reality content (e.g., overlays, visual effects, and the like) that includes the supplemental content provided by the supplemental content system 214 – Para 105; Fig. 6B, 7B).
Claim 2, Drummond discloses the method of claim 1, wherein the accessing the plurality of requests for AR insertion of AR supplemental content that were received from the plurality of AR insertion requesters comprises: transmitting a detection notification to the plurality of AR insertion requesters (i.e. messaging server system transmits data to application servers for retrieval of content – Para 26, 27 – associated with detected objects – Para 107); and receiving a plurality of requests for AR insertion of AR supplemental content from the plurality of AR insertion requesters (i.e. receiving augmentation content from multiple application servers – Fig. 7B).
Claim 3, Drummond discloses the method of claim 1, further comprising: determining whether the indication of detection of the matching object at a corresponding location exceeds a frequency threshold (i.e. based on a number or frequency of confirmed matches between matching search parameters calculate a relevancy score for supplemental content – Para 100); in response to a determination that the indication of detection of the matching object at the corresponding location exceeds a frequency threshold, saving location data for the matching object (i.e. select the supplemental content for display based on the relevancy score – Para 100); and in response to the AR device being within proximity of a location specified by the location data: selecting the selected AR supplemental content based on the plurality of requests for AR insertion of AR supplemental content and the matching object metadata (i.e. I/O components – Para 149 - include biometric components – Para 150 - and environmental components, e.g. proximity sensors – Para 151 to provide output of augmented reality content – Para 106; augmented reality content is provided based on association with attributes/metadata, e.g. coordinates, of detected objects - Para 107, 108; Fig. 7B; Para 25, 99); and transmitting the instruction, for the AR device, to display the selected supplemental content to appear overlaid over the physical environment proximate to the matching object (i.e. the augmented reality content item may be configured to modify the captured image with augmented reality content (e.g., overlays, visual effects, and the like) that includes the supplemental content provided by the supplemental content system 214 – Para 105; Fig. 6B, 7B).
Claim 4, Drummond discloses the method of claim 1, wherein the receiving, from the AR device, the indication of detection of the matching object and matching object metadata is based at least in part on an activity of a user of the AR device based on at least one of a movement pattern or biometric data (i.e. I/O components provide output – Para 149; I/O components include biometric components to detect expressions – Para 150; augmented reality content may correspond to the user’s face – Para 106).
Claim 5, Drummond discloses the method of claim 1, wherein the executing the at least one detector to detect the matching object in the physical environment further comprises:
receiving training data with confirmed data of the matching object (i.e. The object detection system 212 may employ one or more object classifiers to identify objects depicted in a captured image; images may be stored in in a photo library – Para 39; using a machine taught neural network – para 58, 65, 66);
training a machine learning model based on the received training data (i.e. the object detection system 212 is configured to implement or otherwise access object recognition algorithms (e.g., including machine learning algorithms) – Para 40);
detecting a visual representation of the matching object (i.e. scan the captured image, and to detect/track the movement of objects within the image – Para 40); and
inserting the visual representation of the matching object into the trained model (i.e. modify content of a model – Para 58; the augmented reality content item may be configured to modify the captured image with augmented reality content (e.g., overlays, visual effects, and the like) that includes the supplemental content provided by the supplemental content system 214 – Para 105; Fig. 6B, 7B).
Claim 6, Drummond discloses the method of claim 1, wherein the selecting of at least one detector based on matching the user profile data and the AR device data to the plurality of requests further comprises:
determining, for a particular detector, the number of AR insertion requesters requesting the particular detector (i.e. based on a number or frequency of confirmed matches between matching search parameters calculate a relevancy score for supplemental content – Para 100); and in response to the number of AR insertion requesters exceeding a popularity threshold, selecting the particular detector (i.e. select the supplemental content for display based on the relevancy score – Para 100).
Claim 7, Drummond discloses the method of claim 1, wherein matching object metadata comprises at least one of matching object lighting, matching object coordinates, or matching object obstruction level (i.e. augmented reality content is provided based on association with attributes/metadata, e.g. coordinates, of detected objects - Para 107, 108; Fig. 7B; Para 25, 99).
Claim 8, Drummond discloses the method of claim 1, wherein the receiving, from the Augmented Reality (AR) device, the user profile data and the AR device data comprises receiving the user profile data and the AR device data via an AR provider service (i.e. messaging server system provides client device information and geolocation and social network information, e.g. user profile data – Para 25).
Claim 9, Drummond discloses the method of claim 1, wherein the receiving the plurality of requests for AR insertion of AR supplemental content from the plurality of AR insertion requesters comprises receiving the plurality of requests for AR insertion via an AR insertion service (i.e. third party application servers access data associated with messages via a web server processing requests – Para 26).
Claim 10, Drummond discloses the method of claim 1, wherein the AR device data comprises at least one of processing capacity (i.e. processing capacity – Para 24), LIDAR, biometric information (i.e. the messaging system implements an augmentation system – Para 19; the machine, e.g. messaging system, - Fig. 10; Para 146 - includes I/O components – Para 147 - that include biometric components – Para 150; Fig. 10 “1028”), optical device capacity (i.e. I/O components – Para 150 - include environmental components, such as video capabilities – Para 151), or networking capacity.
Independent claim 11, the claim is similar in scope to claim 1. Therefore, the rationale applied in the rejection of claim 1 applies herein.
Claims 12-19, the corresponding rationale as applied in the rejection of claims 2-10 applies herein.
Independent claim 21, the claim is similar in scope to claim 1. Therefore, the rationale applied in the rejection of claim 1 applies herein.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANTE HARRISON whose telephone number is (571)272-7659. The examiner can normally be reached Monday - Friday 8:00 am to 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached on 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHANTE E HARRISON/Primary Examiner, Art Unit 2615