Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
DETAILED ACTION
Claims 26-47 are pending in this application and have been examined in response to the application filed on 07/30/2025.
CONTINUING DATA
This application is a CON of 18/762,589 07/02/2024 PAT 12394164
18/762,589 is a DIV of 18/217,942 07/03/2023 PAT 12148110
18/217,942 is a CON of 18/079,799 12/12/2022 PAT 11741681
18/079,799 is a CON of 16/914,242 06/26/2020 PAT 11551424
16/914,242 is a CON of 16/119,857 08/31/2018 PAT 10699487
16/119,857 is a CON of 15/648,411 07/12/2017 PAT 10068384
15/648,411 is a CON of 13/709,618 12/10/2012 PAT 9728008
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent.
Claims 26-28, 31-33, 35-40, and 42-47 are rejected under pre-AIA 35 U.S.C. 102(a) as being anticipated by Flynn et al. (US 2012/0290591 A1).
As to INDEPENDENT claim 26, Flynn discloses a computer-based real-world object annotation system comprising: an interaction database (fig.2; “232” a tagging database is used to store virtual tags associated with real-world objects) storing one or more interaction objects associated with one or more corresponding real-world objects, wherein each interaction object comprises: one or more media features derivable from sensor data relating to the one or more corresponding real-world objects, and location data associated with the one or more corresponding real-world objects (fig.2, “212”; media features are captured by the image capture module (camera) and the location data is captured by the GPS module);
at least one non-transitory computer readable memory storing software instructions (fig.2, “212”; software instructions are stored in the user device); and at least one processor (fig.11, “processor”) coupled with the at least one non-transitory computer readable memory and the interaction database, wherein the at least one processor performs the following operations upon execution of the software instructions:
receiving, from a first mobile device, a digital representation of a real-world object captured by at least one sensor of the first mobile device ([0038]; the user device captures an image of a real-world object);
determining a current location of the first mobile device ([0038]; the current location of the user device is determined by the GPS module);
generating a set of media features from the digital representation ([0028]; [0083]; media features of the real-world objects are captured by a camera);
creating a virtual interaction instance as an annotation associated with the real-world object based on at least the current location and at least a part of the set of media features ([0040], [0083]; object location, object features, and object annotation are associated with a real-world object);
storing the virtual interaction instance in the interaction database as a new interaction object of the annotation ([0053]; virtual instance is stored in a database); and
providing access to the new interaction object of the annotation to one or more other mobile devices, based at least on the location of the other mobile device relative to the current location and a matching of media features derived from a subsequent digital representation of the real-world object captured by the other mobile device ([0046], [0060], [0083]; other devices can access the location-based interaction objects).
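For orientation only, and not attributable to Flynn or to the claims as filed, the following Python sketch illustrates the flow mapped above: creating a location- and feature-keyed interaction object, storing it in an interaction database, and gating access by device location and media-feature matching. All names, types, and thresholds are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionObject:
    # Mirrors the claim-26 mapping: media features derivable from sensor
    # data plus location data for the corresponding real-world object.
    features: frozenset   # media features from the digital representation
    lat: float            # location data: latitude
    lon: float            # location data: longitude
    content: str          # the annotation payload

DATABASE: list = []       # stands in for the tagging database ("232" in fig.2)

def create_annotation(features, lat, lon, content):
    """Create a virtual interaction instance and store it as a new object."""
    obj = InteractionObject(frozenset(features), lat, lon, content)
    DATABASE.append(obj)
    return obj

def provide_access(derived_features, lat, lon, max_deg=0.001, min_overlap=0.6):
    """Return stored annotations near the requesting device whose media
    features match those derived from the device's subsequent capture.
    (Crude planar distance here; a geodesic check is sketched under claim 40.)"""
    derived = frozenset(derived_features)
    return [o for o in DATABASE
            if abs(o.lat - lat) <= max_deg and abs(o.lon - lon) <= max_deg
            and len(o.features & derived) / max(len(o.features), 1) >= min_overlap]
```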
As to claim 27, Flynn discloses wherein content of the new interaction object of the annotation comprises at least one of text content, image content, video content, or audio content (fig.13; different types of annotations can be associated with a real-world object).
As to claim 28, Flynn discloses wherein the at least one sensor comprises a camera and the digital representation comprises image data ([0050]; a camera is usable for capturing image data).
As to claim 31, Flynn discloses wherein the new interaction object comprises a game object element associated with the real-world object ([0071], [0072]; a user can turn the interaction object into a gaming element by annotating the object with challenge tags).
As to claim 32, Flynn discloses wherein the operations further include establishing a virtual world wiki (VWW) comprising multiple new interaction objects associated with multiple real-world objects ([0066]; wiki pages are tagged to real-world objects).
As to claim 33, Flynn discloses wherein the operation of providing access to the new interaction object includes activating the new interaction object when a second mobile device is within a predetermined distance from the current location ([0073]; interactions are triggered based on the location of the object).
As to claim 35, Flynn discloses wherein the operations further include binding the new interaction object to a user (the object author can specify which users can interact with the object).
As to claim 36, Flynn discloses wherein the operations further include enabling the other mobile device to access multiple new interaction objects simultaneously through different modalities of interaction (fig.13; different interaction modalities, such as listening to an audio tag or watching a video tag, are simultaneously available for selection).
As to claim 37, Flynn discloses wherein the operations further include instantiating the new interaction object according to different formats based on user capabilities comprising at least one of visual capabilities or audio capabilities (fig.13; the user can select an audio or a video representation of the real-world object).
As to claim 38, Flynn discloses wherein the operations further include customizing the new interaction object based on at least one of: a time of day, a user preference, a device capability, or a historical user behavior ([0087]; user specified filters are applied).
As to claim 39, Flynn discloses wherein the operations further include processing the digital representation in substantially real-time to provide access to the new interaction object ([0053]; objects are accessible in real-time).
As to claim 40, Flynn discloses wherein the operations further include establishing a geo-fence around the current location; and making the new interaction object of the annotation available to mobile devices within the geo-fence ([0073]; object interactions are location-specific).
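As an illustrative aside (a minimal sketch, not drawn from Flynn), a circular geo-fence of the kind recited in claim 40 can be implemented as a great-circle distance test against the stored location; the radius and function names below are assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(center, device, radius_m=100.0):
    """True when the device lies inside the circular geo-fence."""
    return haversine_m(*center, *device) <= radius_m
```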
As to claim 42, Flynn discloses wherein the operations further include synchronizing access to the new interaction object among points of interactions ([0073]; access to the coupon tags is synchronized with a specified location).
As to claim 43, Flynn discloses wherein the operations further include arranging, placing or curating multiple new interaction objects in a physical space according to corresponding location data (fig.13; [0073]; multiple tag objects can be applied to a real-world object).
As to claim 44, Flynn discloses wherein the operations further include processing multiple channels of the digital representation in parallel, wherein the channels comprise at least two of: video data, image data, metadata, audio data, or chat data (fig.3; different media tags are shown).
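For illustration only (not from Flynn's disclosure), processing the recited channels in parallel could be sketched as follows; the channel names and handlers are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def process_channels(channels):
    """Run each channel handler (e.g., video, image, metadata, audio, chat)
    concurrently; `channels` maps a name to a (handler, data) pair."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, data)
                   for name, (fn, data) in channels.items()}
        return {name: f.result() for name, f in futures.items()}
```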
As to claim 45, Flynn discloses wherein the operations further include maintaining persistence of the new interaction object across multiple user sessions while restricting access to the new interaction object based on at least one of temporal constraints or geographic constraints ([0073]; objects are location constrained).
As to INDEPENDENT claim 46, see the rationale set forth in the rejection of claim 26 above.
As to INDEPENDENT claim 47, see the rationale set forth in the rejection of claim 26 above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.
Claim 29 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Flynn in view of Dune (US 2012/0275705 A1).
As to claim 29, Flynn does not expressly disclose wherein the media features comprise scale-invariant feature transform (SIFT) features.
In the same field of endeavor, Dune discloses wherein the media features comprise scale-invariant feature transform (SIFT) features ([0009], [0010]).
It would have been obvious to one of ordinary skill in the art at the time the invention was made, having the teachings of Flynn and Dune before him or her, to modify the real-world object tagging interface taught by Flynn to include image searching using SIFT as taught by Dune, with the motivation being to detect characteristic key points and to identify object types and positions within the captured image.
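By way of illustration only (not part of Dune's disclosure), SIFT keypoint detection of the kind relied upon above is available in OpenCV; the image path below is a hypothetical placeholder.

```python
import cv2  # requires opencv-python >= 4.4, where SIFT is patent-free

# Hypothetical input image of a captured real-world object.
img = cv2.imread("captured_object.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# Each keypoint carries a position (kp.pt), scale (kp.size), and orientation
# (kp.angle); descriptors is an N x 128 float32 array usable for matching.
print(len(keypoints), None if descriptors is None else descriptors.shape)
```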
Claim 30 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Flynn in view of Roy et al. (US 9,473,730 B1).
As to claim 30, Flynn does not expressly disclose wherein the operations further include: tracking historical interaction patterns of one or more users with the new interaction object; and customizing availability of the new interaction object based on the historical interaction patterns.
In the same field of endeavor, Roy discloses tracking historical interaction patterns of one or more users…; and customizing availability of the new … based on the historical interaction patterns (col.14, l.32-45; contents are customized and personalized based on user interaction history).
It would have been obvious to one of ordinary skill in the art at the time the invention was made, having the teachings of Flynn and Roy before him or her, to modify the real-world object tagging interface taught by Flynn to include the personalized content taught by Roy, with the motivation being to enhance usability by tailoring different options to different user groups.
Claim 34 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Flynn in view of Abu-Hakima et al. (US 2010/0199188 A1).
As to claim 34, Flynn does not expressly disclose wherein the operations further include customizing the new interaction object based on capabilities of receiving mobile devices.
In the same field of endeavor, Abu-Hakima discloses customizing contents based on capabilities of receiving mobile devices ([0245]; content is customized based on the physical capabilities of the device).
It would have been obvious to one of ordinary skill in the art at the time the invention was made, having the teachings of Flynn and Abu-Hakima before him or her, to modify the real-world object tagging interface taught by Flynn to include the device-appropriate content delivery taught by Abu-Hakima, with the motivation being to share content in a format suited to the capabilities of each receiving device.
Claim 41 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Flynn in view of DeBlaey et al. (US 2009/0282366 A1).
As to claim 41, Flynn does not expressly disclose wherein the operations further include providing advanced interaction options for the new interaction object based on a determined sophistication level of a user derived from historical interactions.
In the same field of endeavor, DeBlaey discloses providing advanced interaction options for the new interaction object based on a determined sophistication level of a user derived from historical interactions ([0004], [0020]; menu options are tailored based on past user interactions).
It would have been obvious to one of ordinary skill in the art at the time the invention was made, having the teachings of Flynn and DeBlaey before him or her, to modify the real-world object tagging interface taught by Flynn to include the expert menu items taught by DeBlaey, with the motivation being to enhance usability by tailoring different options to different user groups.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAOSHIAN SHIH whose telephone number is (571)270-1257. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FRED EHICHIOYA can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAOSHIAN SHIH/Primary Examiner, Art Unit 2179