DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the Amendment filed on 11/10/2025.
Claims 1-20 are pending. Claims 1, 3-5, 7, 9, and 10 have been amended. Claims 11-20 are newly added.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 10-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Woo et al. (US 20110216090 A1, hereinafter "Woo"), in view of Hosein et al. (US 20140046802 A1, hereinafter "Hosein"), and further in view of Bennett et al. (US 20200273254 A1, hereinafter "Bennett").
Regarding Claim 11, Woo teaches a system for presenting [[ ticket ]] information to a user comprising (Woo, Paragraph [0009], “An exemplary embodiment of the present invention provides an interactive augmented reality system including”): [[ eyeglass frames, and ]] at least one processor [[ disposed on the eyeglass frames ]] , the at least one processor enabling operations comprising (Woo, Paragraph [0070], “In order to implement an actual system, a computer with core 2 duo CPU 2.66 GHz and n Vidia GTX 280 is used”):
enabling capture by a portable camera of an image of a ticket object having at least one distinctive image pattern that is not a QR code or a bar code, disposed thereon in an area of recognition (Woo, Paragraph [0045], "The tracking module 310 is to track the camera, and includes a color image camera 311, a feature detector 312, a pattern/model recognizing unit ... " <read on portable camera capture and recognizing an image pattern via detected features rather than barcode decoding>); recognizing the at least one distinctive image pattern from the captured image of the [[ ticket ]] object in real time without requiring recognition outside the area of recognition; (Woo, Paragraph [0043], "The image analyzing process is a process of finding a feature <read on distinctive image pattern> from an acquired image. To this end, the image analyzing process performs feature tracking ... the 3D reference coordination system is made on the basis of the feature found through the image analyzing process ... "; [0063], “The position of the camera is computed in real time by using information of the 3D restored miniature”; [0072], “Plane information in real space is extracted from 3D features”; it is noted that since the feature is found within the image itself, the recognition does not require recognition outside the area of recognition); associating, in real time, the recognized [[ ticket ]] object with a record in response to the recognizing; (Woo, Paragraph [0076], “The real-time camera tracking is a process of tracking 3D-2D matching relationship in real time”; “SIFT (scale invariant feature transform) features are detected from them. 3D coordinates of the features and a SIFT descriptor for matching are generated by image-based 3D restoration technology”); selecting, in real time, an interactive media item in response to the associating; and (Woo, Paragraph [0043], "In the analyzing process, according to user input signals, corresponding contents ... are selected from among the prepared contents.
The kinds of contents include 3D models, sounds, images, videos, etc."; [0082], “a method of implementing the interaction in augmented reality by using the interactive augmented reality system”); superimposing, in real time, the selected interactive media item onto a display of the captured image or an image derived therefrom (Woo, Paragraph [0043], "In this case, the selected contents are superimposed at positions determined according to the coordinate systems made on the basis of the features found from the images.").
However, Woo does not explicitly disclose that the object is a ticket object.
But Hosein teaches enabling operations comprising: (Hosein, Paragraph [0046], "prompt to scan or take a picture of a code printed on their ticket ... using scanning/camera component 220."; [0052], "using scanning/camera <read on portable camera> component 220 of client mobile device 102. The seat may also be captured by capturing one or more images of the ticket and performing image recognition on the captured one or more images to determine the seat"); enabling capture by a portable camera of an image of a ticket object having at least one distinctive image pattern that is not a QR code or a bar code, disposed thereon in an area of recognition (Hosein, Paragraph [0050], "apply one or more image recognition or optical character recognition algorithms on one or more captured images of the event ticket"; it is noted that since it is using image recognition, it is not using a QR code or bar code); recognizing the at least one distinctive image pattern from the captured image of the ticket object in real time (Hosein, Paragraph [0050], “apply one or more image recognition or optical character recognition algorithms on one or more captured images of the event ticket or location marker to recognize the user's seat location”); associating, in real time, the recognized ticket object with a record in response to the recognizing (Hosein, Paragraphs [0050], [0052], “The seat may also be captured by capturing one or more images of the ticket and performing image recognition on the captured one or more images to determine the seat”; “processing component 206 may apply one or more image recognition or optical character recognition algorithms on one or more captured images of the event ticket or location marker to recognize the user's seat location”).
Hosein and Woo are analogous since both of them deal with processing data in an augmented reality environment. Woo provided a way of feature-based recognition from captured images and superimposing selected contents. Hosein provided a way of object identification, particularly from a ticket scanned with a smartphone, in the augmented reality environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the ticket-image recognition/record association from a mobile device taught by Hosein into the modified invention of Woo, such that when superimposing data in the augmented reality environment, the system is able to extend to multiple fields including tickets, which enhances the capability of the system so that more users can benefit from the expanded ability of the augmented reality system across multiple areas of technology.
The combination does not explicitly disclose, but Bennett teaches, a system comprising eyeglass frames and at least one processor disposed on the eyeglass frames, the at least one processor enabling operations (Bennett, Paragraph [0043],
“the computing device 10 is a head-mounted display (HMD) device. The illustrated computing device 10 takes the form of a wearable visor, but it will be appreciated that other forms are possible, such as glasses or goggles, among others”; [0044], “the computing device 10 may include a logic subsystem 462… The logic subsystem 462 may include one or more processors 432 configured to
execute software instructions”; [0045], “the processor 432 of the computing device 10 is operatively coupled to the display panels… and to other display-system componentry”).
Bennett and Woo are analogous since both of them deal with presenting augmented/mixed reality content to a user based on camera-captured images. Woo provided a way of recognizing patterns from acquired images and superimposing selected contents. Bennett provided a way of implementing mixed reality presentation using a head-worn eyewear device. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the head-worn eyewear form factor taught by Bennett into the modified invention of Woo, such that the system comprises eyeglass frames and performs the ticket-based recognition/record association and overlay presentation, which provides a more user-friendly form factor and enhances the system's capability, allowing the user a fuller augmented reality experience.
Regarding Claim 1, it recites limitations similar in scope to the limitations of Claim 11, but as a method, and the combination of Woo, Hosein and Bennett teaches all the limitations as set forth for Claim 11. Therefore, it is rejected under the same rationale.
Regarding Claim 2, the combination of Woo, Hosein and Bennett teaches the invention in Claim 1.
The combination further teaches wherein the superimposing comprises using at least one of augmented reality, mixed reality and virtual reality (Woo, Paragraph [0009], “An exemplary embodiment of the present invention provides an interactive augmented reality system”; [0045], “The contents module 330 relates to the space information for implementing the interaction between the real environment and the virtual contents, and includes a mixed reality simulator”).
Bennett teaches the superimposing comprises using at least one of augmented reality, mixed reality and virtual reality (Bennett, Paragraph [0089], “an operator 680 is wearing mixed-reality (MR) device 601. Mixed-reality device 601 is an example of hologram device 501 that is a wearable, head-mounted display mixed-reality device. Via MR device 601, in the example illustrated in FIG. 6, the operator can see step card 671, picture 672, 3D hologram 673, and tether 674, all superimposed on a real-world environment” [0002], “mixed reality takes place not only in the physical world or the virtual world, but includes a mix of elements from reality and virtual reality, encompassing both augmented reality and augmented virtuality via immersive technology”).
Bennett and Woo are analogous since both of them deal with processing data in an augmented reality environment. Woo provided a way of recognizing an object from an image and superimposing content on the image in the augmented reality environment. Bennett provided a way of recognizing an object from an image and superimposing content on the image not only in a mixed reality environment but also in augmented reality and virtual reality environments. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the multiple environments taught by Bennett into the modified invention of Woo, such that when dealing with data in the three-dimensional world, the system is able to run not only in an augmented reality environment but also supports multiple environments such as mixed reality and virtual reality, which enhances the capability of the system and provides a more user-friendly experience.
Regarding Claim 3, the combination of Woo, Hosein and Bennett teaches the invention in Claim 1.
The combination further teaches wherein the recognition comprises recognizing a two-dimensional or three-dimensional [[ ticket ]] object with print on it (Woo, Paragraph [0043], "The image analyzing process is a process of finding a feature from an acquired image. To this end, the image analyzing process performs feature tracking ... the 3D reference coordination system is made on the basis of the feature found through the image analyzing process ... "; Paragraph [0045], "The tracking module 310 is to track the camera, and includes ... a feature detector 312, a pattern/model recognizing unit").
However, Woo does not explicitly disclose a ticket [[ object with print on it ]].
But Hosein teaches recognizing a two-dimensional or three-dimensional ticket object with at least one distinctive image pattern printed on it (Hosein, Paragraph [0046], “scan or take a picture ... printed on their ticket ... using scanning/camera component 220”; [0047], “User 108 may then be prompted to scan or take a picture of a code printed on their ticket, or on a venue or event marker”; it is noted that the code on the ticket is a two-dimensional printed object).
As explained in the rejection of Claim 1, the obviousness rationale for combining the ticket object of Hosein into Woo is provided above.
Regarding Claim 4, the combination of Woo, Hosein and Bennett teaches the invention in Claim 1.
The combination further teaches wherein the ticket object has at least one distinctive image pattern printed thereon (Woo, Paragraph [0045], "The tracking module 310 is to track the camera, and includes a color image camera 311, a feature detector 312, a pattern/model recognizing unit ... " <read on portable camera capture and recognizing an image pattern via detected features rather than barcode decoding>);
and the recognizing comprises recognizing the printed at least one distinctive image pattern (Woo, Paragraph [0043], "The image analyzing process is a process of finding a feature <read on distinctive image pattern> from an acquired image. To this end, the image analyzing process performs feature tracking ... the 3D reference coordination system is made on the basis of the feature found through the image analyzing process").
Regarding Claim 5, the combination of Woo, Hosein and Bennett teaches the invention in Claim 4.
The combination further teaches wherein the recognizing further includes recognizing characters printed on the ticket object (Hosein, Paragraph [0050], “processing component 206 may apply one or more image recognition or optical character recognition algorithms on one or more captured images of the event ticket or location marker to recognize the user's seat location”).
As explained in the rejection of Claim 1, the obviousness rationale for combining the ticket object of Hosein into Woo is provided above.
Regarding Claim 6, the combination of Woo, Hosein and Bennett teaches the invention in Claim 5.
The combination further teaches wherein the ticket object comprises a patch of printed material attached to an associated item (Hosein, Paragraph [0011], “an interface for browsing and purchasing items associated with the event or venue using the enhanced user experience application”; [0045], “user 108 may be able to scan a code, such as a bar code or a Quick Response (QR) code printed on their ticket, or a code on a venue marker near them, such as on a back of a seat or a lamppost in the venue or event, using a scanning/camera component 220 to provide the vendor the user's seat information for delivery of the ordered concessions to the user's seat. Alternatively, the scanning application may apply image recognition or optical character recognition on the scanned ticket to recognize the user's seat location”; [0046], “scan or take a picture of a code printed on their ticket or on a venue or event marker, such as a code placed on a nearby seat or lamppost”).
As explained in the rejection of Claim 1, the obviousness rationale for combining the ticket object of Hosein into Woo is provided above.
Regarding Claim 7, the combination of Woo, Hosein and Bennett teaches the invention in Claim 1.
The combination further teaches wherein the selected interactive media item comprises a digital overlay that leads to specific action selected from the group consisting of providing specific information; a video, tutorial, or any kind of displayable content (Woo, Paragraph [0042], “implementing interactive augmented reality. Among them, in particular, the user input information acquiring process and the analyzing process are directly related to an interactive function.” [0043], "The kinds of contents include 3D models, sounds, images, videos <read on interactive media>, etc. In the applying process, matching between the actual images and the contents is performed. In this case, the selected contents are superimposed at positions determined according to the coordinate systems made on the basis of the features found from the images"; [0044], “If the interactive augmented reality system 100 is generally divided into four constituent elements, it can be divided into a tracking module 310, an interaction module 320, a contents module 330, and a miniature AR module”).
Regarding Claim 8, the combination of Woo, Hosein and Bennett teaches the invention in Claim 1.
The combination further teaches that the superimposing is performed (Woo, Paragraph [0043], "the selected contents are superimposed at positions determined according to the coordinate systems made on the basis of the features found from the images").
Woo does not explicitly disclose, but Hosein teaches, wherein the superimposing is performed on a handheld display device, a user's retina or smart glasses (Hosein, Paragraph [0033], "the enhanced user experience is presented on the client mobile device <read on handheld display device>").
As explained in the rejection of Claim 1, the obviousness rationale for combining the handheld display device of Hosein into Woo is provided above.
Regarding Claim 10, the combination of Woo, Hosein and Bennett teaches the invention in Claim 1.
The combination further teaches displaying any or all of the following action buttons in any combination or subcombination (Woo, Paragraph [0042], “The interactive augmented reality system 100 sequentially performs a camera image acquiring process… an outputting process, etc.”): [[ Price Tag, ]] Photo Gallery, Videos, (Woo, Paragraph [0043], "The kinds of contents include 3D models, sounds, images, videos, etc <read on Photo Gallery and Videos>); [[ Description, Call, Mail, Shop link merchandise, Explanation, Intro Social Media links, Map, Discount Code, Reviews, Directions, Booking Opportunities, and/or Seat information ]].
Woo does not explicitly disclose but Hosein teaches Shop link merchandise, Map (Hosein, Paragraph [0038], "include a link to a store ... or shop ... Selecting the link ... may provide ... merchandise available for purchase ... "; Paragraph [0040], "map ... link which may provide a map and directory of the ... ").
As explained in the rejection of Claim 1, the obviousness rationale for combining the ticket-based enhanced experience of Hosein into Woo is provided above.
Regarding Claim 12, it recites limitations similar in scope to the limitations of Claim 2 and therefore is rejected under the same rationale.
Regarding Claim 13, it recites limitations similar in scope to the limitations of Claim 3 and therefore is rejected under the same rationale.
Regarding Claim 14, it recites limitations similar in scope to the limitations of Claim 4 and therefore is rejected under the same rationale.
Regarding Claim 15, it recites limitations similar in scope to the limitations of Claim 5 and therefore is rejected under the same rationale.
Regarding Claim 16, it recites limitations similar in scope to the limitations of Claim 6 and therefore is rejected under the same rationale.
Regarding Claim 17, it recites limitations similar in scope to the limitations of Claim 7 and therefore is rejected under the same rationale.
Regarding Claim 18, it recites limitations similar in scope to the limitations of Claim 8 and therefore is rejected under the same rationale.
Regarding Claim 20, it recites limitations similar in scope to the limitations of Claim 10 and therefore is rejected under the same rationale.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Woo et al. (US 20110216090 A1, hereinafter "Woo"), in view of Hosein et al. (US 20140046802 A1, hereinafter "Hosein"), and further in view of Bennett et al. (US 20200273254 A1, hereinafter "Bennett"), as applied to Claim 1 above, and further in view of Carre et al. (US 20150348329 A1, hereinafter "Carre").
Regarding Claim 9, the combination of Woo, Hosein and Bennett teaches the invention in Claim 1.
The combination does not explicitly disclose, but Carre teaches, that the selected interactive media item comprises a call button (Carre, Paragraph [0044], “The ghost image is overlaid in the camera preview of the application”; [0039], “The content…might also include a transparent user interface that proposes one or more choices of actions or provides actionable buttons to the user”; [0040], “Augmented Reality Event (ARE) including data for the application to interact with the user…an instruction for a user action…a telephone number to call”).
Carre and Woo are analogous since both of them deal with processing data in an augmented reality environment. Woo provided a way of recognizing an object from an image and superimposing content on the image in the augmented reality environment. Carre provided a way of overlaying a call button on the image while dealing with objects in the image in the augmented reality environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the overlaid call button taught by Carre into the modified invention of Woo, such that when dealing with data in the augmented reality environment, the system is able to provide a phone call button, which provides a more user-friendly interface when using the system.
Regarding Claim 19, it recites limitations similar in scope to the limitations of Claim 9 and therefore is rejected under the same rationale.
Response to Arguments
The rejection of Claims 1-10 on the ground of nonstatutory double patenting is withdrawn in view of the eTerminal Disclaimer filed by Applicant on 11/10/2025.
Applicant's arguments with respect to Claim 1, filed on 11/10/2025, regarding the rejection under 35 U.S.C. § 103 and asserting that the prior art does not teach the limitation(s), have been considered but are moot in view of the new ground(s) of rejection.
In regard to Claims 2-10, they directly or indirectly depend on independent Claim 1. Applicant does not present arguments beyond those for independent Claim 1. The limitations of those claims are addressed by the combination previously established, as explained above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20080182644 A1 Game apparatus for displaying information about a game
US 20150356812 A1 System and method for augmented reality using a player card
US 20150302665 A1 Triangulation of points using known points in augmented or virtual reality systems
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI whose telephone number is (571)272-6669. The examiner can normally be reached 8:30am-5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached on (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YuJang Tswei/Primary Examiner, Art Unit 2614