DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the Amendment filed on 9/8/2025.
Claims 1-20 are pending. Claims 1, 3, 9, and 10 have been amended. Claims 11-20 are newly added.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-10 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 of copending Application No. 17/689,931. Although the claims at issue are not identical, they are not patentably distinct from each other because they claim the same subject matter and limitations, as explained below.
Claim 1 is determined to be obvious in view of claim 1 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 claim 1
17/689,931 claim 1
1. A method of presenting stadium seat information to a user comprising: enabling capture by a portable camera of an image of a stadium seat object having indicia disposed thereon; recognizing the indicia from the captured image; associating the recognized stadium seat object with a record in response to the recognizing; selecting a interactive media item in response to the recognition and/or the associating; and superimposing the selected interactive media item onto a display of the captured image or an image derived therefrom.
1. A method of presenting ticket information to a user characterized by: enabling capture by a portable camera of an image of a ticket object having indicia disposed thereon; recognizing the indicia from the captured image; associating the recognized ticket object with a record in response to the recognizing; selecting a media item in response to the recognition and/or the associating; and superimposing a selected media item onto a display of the captured image or an image derived therefrom.
Claim 2 is determined to be obvious in view of claim 2 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 claim 2
17/689,931 claim 2
2. The method of claim 1 wherein the superimposing comprises using at least one of augmented reality, mixed reality and virtual reality.
2. The method of claim 1 wherein the superimposing comprises using at least one of augmented reality, mixed reality and virtual reality.
Claim 3 is determined to be obvious in view of Claim 3 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 Claim 3
17/689,931 Claim 3
3. The method of claim 1 wherein the recognition comprises recognizing a three-dimensional stadium seat object with print on it.
3. The method of claim 1 wherein the recognition comprises recognizing a two-dimensional or three-dimensional ticket object with print on it.
Claim 4 is determined to be obvious in view of Claim 4 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 Claim 4
17/689,931 Claim 4
4. The method of claim 1 wherein the stadium seat object has indicia printed thereon, and the recognizing comprises recognizing at least some of the printed indicia.
4. The method of claim 1 wherein the ticket object has indicia printed thereon, and the recognizing comprises recognizing at least some of the printed indicia.
Claim 5 is determined to be obvious in view of Claim 5 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 Claim 5
17/689,931 Claim 5
5. The method of claim 4 wherein the recognizing includes recognizing characters printed on the stadium seat object.
5. The method of claim 4 wherein the recognizing includes recognizing characters printed on the ticket object.
Claim 6 is determined to be obvious in view of Claim 6 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 Claim 6
17/689,931 Claim 6
6. The method of claim 5 wherein the stadium seat object comprises a patch of printed material attached to an associated item.
6. The method of claim 5 wherein the ticket object comprises a patch of printed material attached to an associated item.
Claim 7 is determined to be obvious in view of Claim 7 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 Claim 7
17/689,931 Claim 7
7. The method of claim 6 wherein the selected media item comprises a digital overlay that leads to specific action selected from the group consisting of providing specific information; a video, tutorial, or any kind of displayable content.
7. The method of claim 6 wherein the selected media item comprises a digital overlay that leads to specific action selected from the group consisting of providing specific information; a video, tutorial, or any kind of displayable content.
Claim 8 is determined to be obvious in view of Claim 8 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 Claim 8
17/689,931 Claim 8
8. The method of claim 1 wherein the superimposing is performed on a handheld display device, a user's retina or smart glasses.
8. The method of claim 1 wherein the superimposing is performed on a handheld display device, a user's retina or smart glasses.
Claim 9 is determined to be obvious in view of Claim 9 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 Claim 9
17/689,931 Claim 9
9. The method of claim 1 wherein the selected media item comprises a call button to call a service to the stadium seat object.
9. The method of claim 1 wherein the selected media item comprises a call button.
Claim 10 is determined to be obvious in view of Claim 10 of Application No. 17/689,931 because the two claims recite similar limitations, as shown below.
17/689,929 Claim 10
17/689,931 Claim 10
10. The method of claim 1 further including displaying any or all of the following action buttons in any combination or subcombination: Price Tag, Photo Gallery, Videos Description, Call, Mail, Shop link to buy merchandise or tickets, Explanation, Intro, Social Media links, Map, Discount Codes, Reviews, Tutorials, Previews, Order food and/or Booking opportunities.
10. The method of claim 1 further including displaying any or all of the following action buttons in any combination or subcombination: Price Tag Photo Gallery Videos Description Call Mail Shop link merchandise Explanation Intro Social Media links Map Discount Codes Reviews Directions Booking opportunities Seat information.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-8, 10, 11, 13-18, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wan (US 20140111542 A1), in view of Nurmi (US 20120221241 A1).
Regarding Claim 1, Wan teaches a method of presenting [[ stadium seat ]] information to a user comprising (Wan, Paragraph [0065], “These advantages improve the user experience and enable further information to be retrieved relating to the user's present visual environment”):
enabling capture by a portable camera of an image of a [[ stadium seat ]] object (Wan, Paragraph [0013], “a mobile application executed by the mobile device, the mobile application including: a display module for displaying the live video feed on a screen of the mobile device”);
recognizing the indicia from the captured image (Wan, Paragraph [0013], “a content retrieval module for retrieving the associated content <read on object>”); associating the recognized [[ stadium seat ]] object with a data record in response to the recognizing (Wan, Paragraph [0013], “a content retrieval module for retrieving the associated content by querying the database based on the machine-encoded text converted by the OCR engine”; Paragraph [0100], “When the OCR engine 32 has detected text 41 in the live video feed 49, it converts (183) it into machine-encoded text and a query (184) on the database 35, 51 is performed… The database query matches (185) a unique result in the database 35, 51, and the associated AR content 40 is retrieved”);
selecting an interactive media item in response to the recognition and/or the associating (Wan, Paragraph [0084]-[0086], “the AR content 40 is a menu of buttons 40A, 40B, 40C <read on interactive media item> as depicted in FIG. 4 displayed within a border 40 positioned proximal to the detected text 41 in the live video feed 49”; “if the ‘Reviews’ button 40A is pressed, the web page that is automatically opened … which is web page containing user reviews of the restaurant on the Open Rice web site”; it is noted that since the menu of buttons is not preloaded, it is dynamically retrieved and presented in direct response to the recognition result, and since it triggers different actions, it is an interactive media item);
and superimposing the selected interactive media item onto a display of the captured image or an image derived therefrom (Wan, Paragraph [0014], “wherein the retrieved associated content <read on interactive media item> is superimposed in the form of Augmented Reality (AR) content on the live video feed using the display module”).
[Image: media_image1.png (greyscale, 528 × 642)]
However, Wan does not explicitly disclose that the object is a stadium seat object.
But Nurmi teaches presenting stadium seat information to a user comprising: enabling capture by a portable camera of an image of a stadium seat object (Nurmi, Paragraph [0038], [0045], [0067], “The UE 101 is any type of mobile terminal, fixed terminal, or portable terminal”. “The UE 101 may also execute an application 109 (e.g., a camera application or other imaging application) that can capture images”; “depicts a previously captured image 561 of stadium seating”),
recognizing the stadium seat object from the captured image (Nurmi, Paragraph [0025], [0056], [0067], “depicts a previously captured image 561 of stadium seating. In this example, the stadium is equipped with an indoor positioning system with accuracy down to the seat level” “the term image media refers to pictures, videos, renderings (e.g., augmented reality renderings, virtual reality renderings), virtual worlds, and/or any other graphical depictions of one or more locations” “a process for recognizing objects in media content” );
associating the recognized stadium seat object with a data record in response to the recognizing (Nurmi, Paragraph [0067], “depicts a previously captured image 561 of stadium seating” [0056], “a process for recognizing objects in media content” [0018], “a process for associating renderings of route information rendering to image media” [0059], “the user can be presented with a rendering of any routes (e.g., the user's routes) that are associated with the location in the camera's field of view”).
Nurmi and Wan are analogous art since both deal with processing data in an augmented reality environment. Wan provides a way of recognizing an object from an image and superimposing action buttons on the image in the augmented reality environment. Nurmi provides a way of performing object identification, in particular of a stadium seat object, using a smart device in the augmented reality environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate object identification of a stadium seat, as taught by Nurmi, into the invention of Wan, such that when superimposing data in the augmented reality environment, the system can extend to multiple fields, including stadium seating, which enhances the capability of the system and allows more users to benefit from the expanded ability of the augmented reality system.
Regarding Claim 4, the combination of Wan and Nurmi teaches the invention in Claim 1.
The combination further teaches wherein the stadium seat object has indicia printed thereon, and the recognizing comprises recognizing at least some of the printed indicia (Wan, Paragraph [0076], [0077], “The text markers 80 in the live video feed 49 for detection by the OCR engine 32 may be found on printed or displayed matter 70, for example, outdoor advertising, shop signs, advertising in printed media, or television or dynamic advertising light boxes” “a platform 10 for recognising text using mobile devices 20 with a built-in device video camera 21 and automatically retrieving associated content based on the recognised text is provided”).
Regarding Claim 5, the combination of Wan and Nurmi teaches the invention in Claim 4.
The combination further teaches wherein the recognizing includes recognizing characters printed on the [[ stadium seat ]] object (Wan, Paragraph [0010], “there is provided a platform for recognising text using mobile devices with a built-in device video camera and automatically retrieving associated content based on the recognized text”).
However, Wan does not explicitly disclose [[ recognizing characters printed on the ]] stadium seat [[ object ]].
But Nurmi teaches recognizing characters printed on the stadium seat object (Nurmi, Paragraph [0067], “depicts a previously captured image 561 of stadium seating” [0025], “the images can be presented in two-dimensions or three-dimensions” [0067], “The route rendering platform 103 can interpret the stationary status as indicating that the seat is most likely the user's seat within the stadium and marks that seat with a star symbol” [0070], “digital data that is used to represent a number or code for a character”)
As explained in the rejection of claim 1, the rationale for combining the stadium seat object of Nurmi with Wan is provided above.
Regarding Claim 6, the combination of Wan and Nurmi teaches the invention in Claim 5.
Wan does not explicitly disclose, but Nurmi teaches, wherein the stadium seat object comprises a printed material attached to an associated item (Nurmi, Paragraph [0029], “the system 100 can attach the route information or rendering the image media directly as metadata”; “the route information or rendering may be maintained in as a separate file, and then the file can be associated (e.g., via a timestamp, index, etc.) with the corresponding image media”).
As explained in the rejection of claim 1, the rationale for combining the stadium seat object of Nurmi with Wan is provided above.
Regarding Claim 7, the combination of Wan and Nurmi teaches the invention in Claim 1.
The combination further teaches wherein the selected media item comprises a digital overlay that leads to specific action selected from the group consisting of providing specific information; a video, tutorial, or any kind of displayable content (Wan, Paragraph [0084], “the AR content 40 is a menu of buttons 40A, 40B, 40C as depicted in FIG. 4 displayed within a border 40 positioned proximal to the detected text 41 in the live video feed 49”).
Regarding Claim 8, the combination of Wan and Nurmi teaches the invention in Claim 1.
The combination further teaches wherein the superimposing is performed on a handheld display device, a user's retina or smart glasses (Wan, Paragraph [0013], a mobile application executed by the mobile device <read on handheld display device>, the mobile application including: a display module for displaying the live video feed on a screen of the mobile device; and a content retrieval module for retrieving the associated content by querying the database based on the machine-encoded text converted by the OCR engine; [0014] wherein the retrieved associated content is superimposed in the form of Augmented Reality (AR) content on the live video feed using the display module).
Regarding Claim 10, the combination of Wan and Nurmi teaches the invention in Claim 1.
The combination further teaches displaying any or all of the following action buttons in any combination or subcombination (Wan, Paragraph [0084], “the AR content 40 is a menu of buttons 40A, 40B, 40C as depicted in FIG. 4 displayed within a border 40 positioned proximal to the detected text 41 in the live video feed 49”):
[[ Price Tag Photo Gallery Videos Description Call Mail Shop link to buy merchandise or tickets Explanation Intro Social Media links Map ]] Discount Codes (Wan, Figure 1, Element 40B Discounts button <read on Discount code> Paragraph [0084], When a button 40A, 40B, 40C is pressed by the user, at least one web page is opened automatically; Paragraph [0136], AR content 40 for Discount will link to AR link Groupon, Credit Card Discounts…) Reviews (Wan, Figure 1, Element 40A review button, Paragraph [0084], “if the “Reviews’ button 40A is pressed, the web page that is automatically opened” “which is web page containing user reviews of the restaurant”) [[ Tutorials Previews Order food Booking opportunities]].
Regarding Claim 11, it recites limitations similar in scope to those of claim 1, but as a system. As shown in the rejection above, the combination of Wan and Nurmi discloses the limitations of claim 1. Additionally, Wan discloses a system as shown in Fig. 1 and Paragraphs [0008], [0054], [0080] (Wan, Paragraphs [0008], [0080], “The mobile application 30 is run on a mobile operating system”; “a processor to execute computer-readable instructions to perform”; “A mobile application called Google™ Goggles analyses a still image captured by a camera phone”). Thus, Claim 11 is met by Wan according to the mapping presented in the rejection of claim 1, the method corresponding to the system.
Regarding Claim 14, it recites limitations similar in scope to the limitations of Claim 4 and therefore is rejected under the same rationale.
Regarding Claim 15, it recites limitations similar in scope to the limitations of Claim 5 and therefore is rejected under the same rationale.
Regarding Claim 16, it recites limitations similar in scope to the limitations of Claim 6 and therefore is rejected under the same rationale.
Regarding Claim 17, it recites limitations similar in scope to the limitations of Claim 7 and therefore is rejected under the same rationale.
Regarding Claim 18, it recites limitations similar in scope to the limitations of Claim 8 and therefore is rejected under the same rationale.
Regarding Claim 20, it recites limitations similar in scope to the limitations of Claim 10 and therefore is rejected under the same rationale.
Claim(s) 2, 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wan (US 20140111542 A1), in view of Nurmi (US 20120221241 A1) as applied to Claim 1 above, and further in view of Bennett et al. (US 20200273254 A1, hereinafter Bennett).
Regarding Claim 2, the combination of Wan and Nurmi teaches the invention in Claim 1.
The combination further teaches wherein the superimposing comprises using at least one of augmented reality, mixed reality and virtual reality (Wan, Paragraph [0014], “wherein the retrieved associated content is superimposed in the form of Augmented Reality (AR) content on the live video feed using the display module”).
However, the combination does not explicitly disclose [[ superimposing comprises using at least one of augmented reality, ]] mixed reality and virtual reality.
But Bennett teaches the superimposing comprises using at least one of augmented reality, mixed reality and virtual reality (Bennett, Paragraph [0089], “an operator 680 is wearing mixed-reality (MR) device 601. Mixed-reality device 601 is an example of hologram device 501 that is a wearable, head-mounted display mixed-reality device. Via MR device 601, in the example illustrated in FIG. 6, the operator can see step card 671, picture 672, 3D hologram 673, and tether 674, all superimposed on a real-world environment” [0002], “mixed reality takes place not only in the physical world or the virtual world, but includes a mix of elements from reality and virtual reality, encompassing both augmented reality and augmented virtuality via immersive technology”).
Bennett and Wan are analogous art since both deal with processing data in an augmented reality environment. Wan provides a way of recognizing an object from an image and superimposing action buttons on the image in the augmented reality environment. Bennett provides a way of recognizing an object from an image and superimposing action buttons on the image not only in a mixed reality environment but also in augmented reality and virtual reality environments. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the multiple environments taught by Bennett into the invention of Wan, such that when dealing with data in a three-dimensional world, the system can run not only in an augmented reality environment but also support multiple environments such as mixed reality and virtual reality, which enhances the capability of the system and provides a more user-friendly experience.
Regarding Claim 12, it recites limitations similar in scope to the limitations of Claim 2 and therefore is rejected under the same rationale.
Claim(s) 3, 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wan (US 20140111542 A1), in view of Nurmi (US 20120221241 A1) as applied to Claim 1, 11 above respectively and further in view of Neumann et al. (US 20190220665 A1, hereinafter Neumann).
Regarding Claim 3, the combination of Wan and Nurmi teaches the invention in Claim 1.
The combination further teaches wherein the recognition comprises recognizing a three-dimensional [[ stadium seat ]] object with a seat number printed on it (Wan, Paragraph [0077], “The text markers 80 in the live video feed 49 for detection by the OCR engine 32 may be found on printed or displayed matter 70, for example, outdoor advertising, shop signs, advertising in printed media, or television or dynamic advertising light boxes”).
However, Wan does not explicitly disclose stadium seat [[ object with a seat number on it ]].
But Nurmi teaches recognizing a [[ three-dimensional ]] stadium seat object with [[ a seat number ]] an object printed on it (Nurmi, Paragraph [0067], “depicts a previously captured image 561 of stadium seating” [0025], “the images can be presented in two-dimensions or three-dimensions” [0067], “The route rendering platform 103 can interpret the stationary status as indicating that the seat is most likely the user's seat within the stadium and marks that seat with a star symbol <read on object with print>”)
As explained in the rejection of claim 1, the rationale for combining the stadium seat object of Nurmi with Wan is provided above.
But the combination does not explicitly disclose the [[ three-dimensional ]] stadium seat object with [[ a seat number ]] printed on it.
However, Neumann teaches the three-dimensional stadium seat object with a seat number printed on it (Neumann, Paragraph [0007], “user ticketing information, to guide the user in navigating to another part of the venue, e.g. the user's seat” [0005], “example, text from an object captured in the digital image is recognized, e.g., using text recognition and object recognition. The text, in this example is indicative of a location, e.g., a sign indicating a corresponding section in a stadium, which is used to directly determine a location with respect to a digital map” [0018], “receives a digital image, a digital ticket, and 2D and 3D maps” [0037], “The AR digital content 126, for instance, may describe a location of a seat, directions to the seat, a relation of that seat to other seats, directions to desired services available at the physical environment 106,” [0060], “The location determination system 120 also includes access to digital images 114 captured by the digital camera 112, e.g., as part of a "live stream." Access to the digital ticket 208 is also permitted, which may include functionality usable to permit user access to the physical environment (e.g., a bar code, QR code), data describing where such access is permitted (e.g., suite number, seat number, section number, level, parking spot, field access)”).
Neumann and Wan are analogous art since both deal with processing data in an augmented reality environment. Wan provides a way of recognizing an object from an image and superimposing action buttons on the image in the augmented reality environment. Neumann provides a way of recognizing an object from a digital medium, such as a stadium ticket, and identifying the seat number from a digital image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the ticket information identification taught by Neumann into the invention of Wan, such that when dealing with data in a three-dimensional world, in particular a stadium ticket image, the system can identify the seat number from the ticket and make it easy for the user to access the seat during an event in the stadium, which creates more user-friendly access to information in the augmented reality environment.
Regarding Claim 13, it recites limitations similar in scope to the limitations of Claim 3 and therefore is rejected under the same rationale.
Claim(s) 9, 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wan (US 20140111542 A1), in view of Nurmi (US 20120221241 A1) as applied to Claim 1 above, and further in view of Bjontegard (US 20150262208 A1).
Regarding Claim 9, the combination of Wan and Nurmi teaches the invention in Claim 1.
The combination further teaches wherein the selected media item comprises a [[ call ]] button to call a service to the stadium seat object (Wan, Paragraph [0084], “the AR content 40 is a menu of buttons 40A, 40B, 40C as depicted in FIG. 4 displayed within a border 40 positioned proximal to the detected text 41 in the live video feed 49”).
But Wan does not explicitly disclose [[ wherein the selected media item comprises a ]] call [[ button ]].
However, Bjontegard teaches the selected media item comprises a call button to call a service to the stadium seat object (Bjontegard, Paragraph [0097], “As the fan leaves the stadium, the fan's connected device will cross the established geo-fence and a "thank you for coming" message can be displayed. This can include an interactive button with an offer and a call to action such as "come back Thursday night--buy now and get 2 tickets for the price of one"” [0198], “This can be an AR game such as a soccer penalty kick game, basketball free throw game, or a baseball batting versus pitcher game, basically anything that is related to the sport being played in the stadium that the ticket will provide entrance to” [0198], “This complete solution thereby enables the stadium owner and/or team to communicate with their fans from the moment they purchase their tickets, as they are coming to the stadium”).
Bjontegard and Wan are analogous art since both deal with processing data in an augmented reality environment. Wan provides a way of recognizing an object from an image and superimposing action buttons on the image in the augmented reality environment. Bjontegard provides a way of overlaying a call button on an image while dealing with objects in the image in the augmented reality environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the overlaid call button taught by Bjontegard into the invention of Wan, such that the system can provide a phone call button and allow the user to make a phone call regarding the ticket and/or seating information of the tickets they purchased, which provides a more user-friendly interface when using the system.
Regarding Claim 19, it recites limitations similar in scope to the limitations of Claim 9 and therefore is rejected under the same rationale.
Response to Arguments
Applicant’s arguments with respect to claim 1, filed on 9/8/2025, regarding the rejection under 35 USC § 103, namely that image analysis does not appear to be involved in recognizing the stadium seat object in the cited prior art combination, have been reviewed but are not persuasive.
In response to the argument, prior art Wan teaches image-based recognition of indicia on a physical object with an AR overlay: Wan describes using a mobile camera to detect text in a live video feed, convert it to machine-encoded text with OCR, retrieve content, and superimpose AR content on the camera view automatically (without user input). The physical signage and printed indicia in Wan are generic and not limited to shops; the same technique reads on seat labels/indicia. Prior art Nurmi provides the specific association in the stadium context: it shows an indoor positioning system with accuracy down to the seat level, and the route platform marks the likely user’s seat (with a star symbol) and overlays the route to that seat on the image. This directly supports associating the recognized stadium seat object with a record and superimposing information onto a captured image of stadium seating. In view of this analysis, it would have been obvious to apply Wan’s OCR-based recognition to the seat indicia visible on the stadium seat object and to use Nurmi’s stadium/seat association to link the recognized seat to a record, then select and superimpose interactive media (AR content) in response to the recognition/association, exactly as claimed. Hence the combination of the prior art fully teaches the limitations. Therefore, Applicant’s remarks cannot be considered persuasive.
Applicant’s arguments with respect to claim 9, 19, filed on 9/8/2025 with respect to rejection under 35 USC § 103 have been considered but are moot in view of the new ground(s) of rejection.
In regard to Claims 2-8 and 10, they depend directly or indirectly on independent Claim 1. Applicant does not present arguments other than those directed to independent Claim 1. Accordingly, the limitations of those claims remain rejected in view of the combination previously established, as explained above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20110141141 A1 Method and apparatus for correlating and navigating between a live image and a prerecorded panoramic image.
US 10325410 B1 Augmented reality for enhancing sporting events.
US 8885982 B2 Object information derived from object images.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI whose telephone number is (571)272-6669. The examiner can normally be reached 8:30am-5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YuJang Tswei/Primary Examiner, Art Unit 2614