DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This office action is responsive to communication filed on 02/23/2026.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114 was filed in this application after a decision by the Patent Trial and Appeal Board, but before the filing of a Notice of Appeal to the Court of Appeals for the Federal Circuit or the commencement of a civil action. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the appeal has been withdrawn pursuant to 37 CFR 1.114 and prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant’s submission filed on 02/23/2026 has been entered.
Response to Amendment
The Examiner acknowledges amended claims 1-2, 4, 11-12, 14, and 16-17.
Response to Arguments
Applicant's arguments filed on 02/23/2026 have been fully considered but they are not persuasive.
Applicant argues that Oetting does not disclose at least the "participant interactions [comprising] participant-initiated actions directed to virtual expo elements" and a "first plurality of participant interactions [comprising] a hover over a virtual expo element or a movement towards a virtual expo element," as recited in amended claim 1.
The Examiner respectfully disagrees with Applicant’s assertion because Oetting discloses that the participants may be able to control the appearances of their avatars, and, depending upon the technology used by the participants to join the virtual event (e.g., whether the participants have access to technology that can track movements), the avatars may even mimic the movements and gestures of the participants in the virtual event. For instance, if a participant waves his left arm in the “real world,” the participant's avatar may wave his left arm in the space of the virtual event (see paragraphs [0032], [0042 – 0043]).
The Examiner points out that the claims recite this limitation in the alternative: a "first plurality of participant interactions [comprising] a hover over a virtual expo element or a movement towards a virtual expo element." Oetting discloses one of the alternatives, namely a movement towards a virtual expo element.
The Examiner has also shown that a secondary reference discloses "a plurality of participant interactions comprising a hover over a virtual expo element" (see the alternative rejection under 35 U.S.C. 103 below).
Thus, the Oetting reference reads on the claimed limitation.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 5-11, 15-16, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Oetting et al. (US 2022/0172415; hereinafter Oetting).
As per claim 1, Oetting teaches a method comprising:
receiving, at a video conference server, interaction data representing a plurality of participant interactions in a virtual expo with a plurality of virtual expo elements [virtual space for professional conference (pp 0011, 0013) as participants navigate a virtual expo space (pp [0046], [0054]; the modification may involve moving (claimed navigate) the first participant to a second location in the virtual event that is different than the first location); server collects data provided by user endpoint participants (pp 0020, 0041); real-time monitoring (pp 0043); feedback provided by participant (pp 0051); continuously monitor behavior (pp 0053)], wherein at least one of the plurality of interactions comprises a gesture (paragraph [0032]; Oetting discloses that the avatars may even mimic the movements and gestures of the participants in the virtual event. For instance, if a participant waves his left arm in the “real world,” the participant's avatar may wave his left arm in the space of the virtual event), wherein the participant interactions comprise participant-initiated actions directed to virtual expo elements (paragraphs [0042], [0043], [0047]; Oetting discloses that if the first participant's profile indicates that she prefers to sit during concerts, and the other participants in the first location begin dancing (thereby potentially obstructing the first participant's view of the stage), then the processing system may determine that a modification to the first location should be made. Similarly, if the processing system detects the first participant making a face that appears to express annoyance (claimed participant-initiated actions), stating that another participant is making it hard for her to hear the band, or sends a text message to the processing system asking to be moved to another location, then the processing system may determine that a modification to the first location should be made); and
the first plurality of participant interactions comprise a movement towards a virtual expo element (paragraphs [0032], [0042-0043]; Oetting discloses that the participants may be able to control the appearances of their avatars, and, depending upon the technology used by the participants to join the virtual event (e.g., whether the participants have access to technology that can track movements), the avatars may even mimic the movements and gestures of the participants in the virtual event. For instance, if a participant waves his left arm in the “real world,” the participant's avatar may wave his left arm in the space of the virtual event);
storing the interaction data in a data store, the interaction data including a participant identifier, a virtual expo element identifier of a virtual expo element that is interacted with in response to a participant action within the virtual expo space, and an interaction characteristic [store user information (pp 0021, 0036, 0054, 0059)];
determining one or more analytics for one or more of the plurality of virtual expo elements based on the interaction data [monitor participant behavior such as other participants' interference with enjoyment (pp 0020)]; and
causing the one or more analytics for one or more of the plurality of virtual expo elements to be displayed on a client device [provide personalized experience (pp 0022-0023); modify location of the participant (pp 0053-0054)].
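For context only, and not as part of the prior-art mapping, the claimed flow of receiving interaction data, storing it with a participant identifier, an element identifier, and an interaction characteristic, and determining per-element analytics could be sketched as follows. All names (InteractionRecord, determine_analytics) and values are hypothetical and appear in neither the claims nor Oetting.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record mirroring the claimed interaction data fields:
# a participant identifier, a virtual expo element identifier, and an
# interaction characteristic (e.g., "gesture" or "movement_towards").
@dataclass
class InteractionRecord:
    participant_id: str
    element_id: str
    characteristic: str

def determine_analytics(records: list[InteractionRecord]) -> dict[str, int]:
    """Count interactions per virtual expo element (one possible 'analytic')."""
    return dict(Counter(r.element_id for r in records))

# Example: two participants interact with a virtual booth and a stage.
data_store = [
    InteractionRecord("p1", "booth-7", "movement_towards"),
    InteractionRecord("p2", "booth-7", "gesture"),
    InteractionRecord("p1", "stage-1", "gesture"),
]
print(determine_analytics(data_store))  # {'booth-7': 2, 'stage-1': 1}
```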
As per claim 5, Oetting teaches the method of claim 1, wherein the interaction data comprises real-time engagement data [participant behavior (pp 0020); real-time monitoring (pp 0043); feedback provided by participant (pp 0051); continuously monitor behavior (pp 0053)].
As per claim 6, Oetting teaches the method of claim 1, wherein the interaction data comprises at least one of a timestamp, a click, an audio recording, a transcript, a gaze target, a facial expression, or a gesture [detect annoyance (pp 0043)].
As per claim 7, Oetting teaches the method of claim 6, wherein the transcript is associated with a chat or conversation [text message conversation (pp 0042-0043)].
As per claim 8, Oetting teaches the method of claim 7, wherein the chat comprises one of a spatial chat or a booth chat [virtual booth (pp 0044); conversation with system (pp 0042-0043)].
As per claim 9, Oetting teaches the method of claim 1, wherein the plurality of virtual expo elements comprises one or more of a virtual expo floor, a virtual expo booth, or a participant avatar [virtual event includes rendering of an avatar (pp 0018, 0032); virtual booth (pp 0044)].
As per claim 10, Oetting teaches the method of claim 8, wherein the virtual expo booth comprises one or more of a video content, an audio content, a text file, or a document [rendering video (pp 0012); virtual booth (pp 0044)].
Claims 11 and 16 are rejected under the same rationale as claim 1 as they do not further limit or define over the claims.
Claims 15 and 20 are rejected under the same rationale as claim 5 as they do not further limit or define over the claim.
Claims 2-4, 12-14, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Oetting et al. (US 2022/0172415; hereinafter Oetting) in view of Graff et al. (US 2014/0343994; hereinafter Graff).
As per claim 2, Oetting teaches the method of claim 1, further comprising: receiving, at the video conference server, physical interaction data representing a second plurality of participant interactions in a physical expo with a plurality of physical expo elements [virtual space for professional conference (pp 0011, 0013); server collects data provided by user endpoint participants (pp 0020, 0044); real-time monitoring (pp 0043); feedback provided by participant (pp 0051); continuously monitor behavior (pp 0053)].
Oetting does not explicitly teach the physical expo associated with the virtual expo; storing the physical interaction data in the data store, the physical interaction data including the participant identifier, a physical expo element identifier, and the interaction characteristic; determining one or more analytics for one or more of the plurality of physical expo elements based on the physical interaction data; and causing the one or more analytics for the one or more of the plurality of physical expo elements to be displayed on a client device with the one or more analytics for the one or more of the plurality of virtual expo elements.
However, in an analogous art, Graff teaches the physical expo associated with the virtual expo [customized experience for event attendee (pp 0105-0107); data on which events are attended and survey questions (pp 0140-0142); storing details for event (pp 0144)];
storing the physical interaction data in the data store, the physical interaction data including the participant identifier, a physical expo element identifier, and the interaction characteristic [track and store physical event of the attendee including movement (pp 0110, 0116)];
determining one or more analytics for one or more of the plurality of physical expo elements based on the physical interaction data [use data to drill down into activity (pp 0121); generate event activity maps (pp 0123)]; and
causing the one or more analytics for the one or more of the plurality of physical expo elements to be displayed on a client device with the one or more analytics for the one or more of the plurality of virtual expo elements [create optimal path for attendee based on traffic and trends (pp 0125)].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual expo of Oetting with the virtual experience of a physical expo of Graff. A person of ordinary skill in the art would have been motivated to do this to improve user experience [Oetting: pp 0022, 0043].
As per claim 3, Oetting in view of Graff teaches the method of claim 2, wherein the plurality of physical expo elements comprises one or more of a physical expo floor location, a physical expo booth, or a participant [Graff: event seminar and key speakers (pp 0107); event activity maps include booth and event activity locations (pp 0123)].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual expo of Oetting with the virtual experience of a physical expo of Graff. A person of ordinary skill in the art would have been motivated to do this to improve user experience [Oetting: pp 0022, 0043].
As per claim 4, Oetting in view of Graff teaches the method of claim 3, wherein the physical interaction data comprises one or more of a barcode scan or a location data from a Bluetooth beacon [Graff: scan QR code or barcode (pp 0115)].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual expo of Oetting with the virtual experience of a physical expo of Graff. A person of ordinary skill in the art would have been motivated to do this to improve user experience [Oetting: pp 0022, 0043].
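For context only, the combination mapped in claims 2-4 above, in which physical-expo interaction records (for example, from a barcode scan per Graff, or a Bluetooth-beacon sighting per the claim language) are merged with the virtual-expo analytics so both can be displayed together, might be sketched as follows. The record fields, names, and merge logic are illustrative assumptions, not disclosures of any cited reference.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical physical-expo counterpart to the virtual interaction record:
# a barcode scan or Bluetooth-beacon sighting tied to a physical booth.
@dataclass
class PhysicalInteractionRecord:
    participant_id: str
    physical_element_id: str
    characteristic: str  # e.g., "barcode_scan", "beacon_proximity"

def combined_analytics(virtual_counts: dict[str, int],
                       physical_records: list[PhysicalInteractionRecord]) -> dict[str, dict[str, int]]:
    """Merge per-element counts so physical and virtual analytics display together."""
    physical_counts = Counter(r.physical_element_id for r in physical_records)
    return {"virtual": virtual_counts, "physical": dict(physical_counts)}

physical = [PhysicalInteractionRecord("p1", "hall-A/booth-7", "barcode_scan")]
print(combined_analytics({"booth-7": 2, "stage-1": 1}, physical))
```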
Claims 12-14 and 17-19 are rejected, mutatis mutandis, under the same rationale as claims 2-3 as they do not further limit or define over the claims.
In the alternative,
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 5-11, 15-16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Oetting et al. (US 2022/0172415; hereinafter Oetting) in view of Yeung et al. (US 2010/0199221; hereinafter Yeung).
As per claim 1, Oetting teaches a method comprising:
receiving, at a video conference server, interaction data representing a plurality of participant interactions in a virtual expo with a plurality of virtual expo elements [virtual space for professional conference (pp 0011, 0013) as participants navigate a virtual expo space (pp [0046], [0054]; the modification may involve moving (claimed navigate) the first participant to a second location in the virtual event that is different than the first location); server collects data provided by user endpoint participants (pp 0020, 0041); real-time monitoring (pp 0043); feedback provided by participant (pp 0051); continuously monitor behavior (pp 0053)], wherein at least one of the plurality of interactions comprises a gesture (paragraph [0032]; Oetting discloses that the avatars may even mimic the movements and gestures of the participants in the virtual event. For instance, if a participant waves his left arm in the “real world,” the participant's avatar may wave his left arm in the space of the virtual event), wherein the participant interactions comprise participant-initiated actions directed to virtual expo elements (paragraphs [0042], [0043], [0047]; Oetting discloses that if the first participant's profile indicates that she prefers to sit during concerts, and the other participants in the first location begin dancing (thereby potentially obstructing the first participant's view of the stage), then the processing system may determine that a modification to the first location should be made. Similarly, if the processing system detects the first participant making a face that appears to express annoyance (claimed participant-initiated actions), stating that another participant is making it hard for her to hear the band, or sends a text message to the processing system asking to be moved to another location, then the processing system may determine that a modification to the first location should be made);
storing the interaction data in a data store, the interaction data including a participant identifier, a virtual expo element identifier of a virtual expo element that is interacted with in response to a participant action within the virtual expo space, and an interaction characteristic [store user information (pp 0021, 0036, 0054, 0059)];
determining one or more analytics for one or more of the plurality of virtual expo elements based on the interaction data [monitor participant behavior such as other participants' interference with enjoyment (pp 0020)]; and
causing the one or more analytics for one or more of the plurality of virtual expo elements to be displayed on a client device [provide personalized experience (pp 0022-0023); modify location of the participant (pp 0053-0054)].
Oetting discloses all the limitations, but fails to specifically disclose that the first plurality of participant interactions comprise a hover over a virtual expo element.
Yeung, in an analogous art, discloses that the first plurality of participant interactions comprise a hover over a virtual expo element (figs. 2DA-DB; paragraphs [0005 - 0006], [0026 - 0029]; Yeung discloses that dynamical variables regarding the user's interaction within and without of the virtual surface may be interpreted to determine a select action, a hover and highlight action, a drag action and a swipe action).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of Oetting by showing that the first plurality of participant interactions comprise a hover over a virtual expo element as evidenced by Yeung for the purpose of determining an interpreted action based upon a relationship of the clean gesture data with respect to the virtual surface; thereby allowing humans to utilize their own bodies and gestures for interaction.
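For context only, the general technique attributed to Yeung, interpreting tracked gesture or pointer data relative to a virtual surface to recognize a hover over a virtual expo element, might look like the following sketch. The dwell-time threshold, bounds representation, and function names are assumptions for illustration and are not taken from Yeung or the claims.

```python
# Hypothetical hover check: a tracked hand or cursor position is treated as
# "hovering" over a virtual expo element when it stays inside the element's
# bounds, without a select action, for at least a dwell interval.
def is_hover(position: tuple[float, float],
             element_bounds: tuple[float, float, float, float],
             dwell_seconds: float,
             dwell_threshold: float = 0.5) -> bool:
    x, y = position
    x0, y0, x1, y1 = element_bounds
    inside = x0 <= x <= x1 and y0 <= y <= y1
    return inside and dwell_seconds >= dwell_threshold

# A pointer resting over a booth tile for 0.8 s registers as a hover.
print(is_hover((0.4, 0.6), (0.0, 0.0, 1.0, 1.0), dwell_seconds=0.8))  # True
```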
As per claim 5, Oetting teaches the method of claim 1, wherein the interaction data comprises real-time engagement data [participant behavior (pp 0020); real-time monitoring (pp 0043); feedback provided by participant (pp 0051); continuously monitor behavior (pp 0053)].
As per claim 6, Oetting teaches the method of claim 1, wherein the interaction data comprises at least one of a timestamp, a click, an audio recording, a transcript, a gaze target, a facial expression, or a gesture [detect annoyance (pp 0043)].
As per claim 7, Oetting teaches the method of claim 6, wherein the transcript is associated with a chat or conversation [text message conversation (pp 0042-0043)].
As per claim 8, Oetting teaches the method of claim 7, wherein the chat comprises one of a spatial chat or a booth chat [virtual booth (pp 0044); conversation with system (pp 0042-0043)].
As per claim 9, Oetting teaches the method of claim 1, wherein the plurality of virtual expo elements comprises one or more of a virtual expo floor, a virtual expo booth, or a participant avatar [virtual event includes rendering of an avatar (pp 0018, 0032); virtual booth (pp 0044)].
As per claim 10, Oetting teaches the method of claim 8, wherein the virtual expo booth comprises one or more of a video content, an audio content, a text file, or a document [rendering video (pp 0012); virtual booth (pp 0044)].
Claims 11 and 16 are rejected under the same rationale as claim 1 as they do not further limit or define over the claims.
Claims 15 and 20 are rejected under the same rationale as claim 5 as they do not further limit or define over the claim.
Claims 2-4, 12-14, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Oetting et al. (US 2022/0172415; hereinafter Oetting) in view of Yeung et al. (US 2010/0199221; hereinafter Yeung), and further in view of Graff et al. (US 2014/0343994; hereinafter Graff).
As per claim 2, Oetting and Yeung teach the method of claim 1, further comprising: receiving, at the video conference server, physical interaction data representing a second plurality of participant interactions in a physical expo with a plurality of physical expo elements [virtual space for professional conference (pp 0011, 0013); server collects data provided by user endpoint participants (pp 0020, 0044); real-time monitoring (pp 0043); feedback provided by participant (pp 0051); continuously monitor behavior (pp 0053)].
Oetting and Yeung do not explicitly teach the physical expo associated with the virtual expo; storing the physical interaction data in the data store, the physical interaction data including the participant identifier, a physical expo element identifier, and the interaction characteristic; determining one or more analytics for one or more of the plurality of physical expo elements based on the physical interaction data; and causing the one or more analytics for the one or more of the plurality of physical expo elements to be displayed on a client device with the one or more analytics for the one or more of the plurality of virtual expo elements.
However, in an analogous art, Graff teaches the physical expo associated with the virtual expo [customized experience for event attendee (pp 0105-0107); data on which events are attended and survey questions (pp 0140-0142); storing details for event (pp 0144)];
storing the physical interaction data in the data store, the physical interaction data including the participant identifier, a physical expo element identifier, and the interaction characteristic [track and store physical event of the attendee including movement (pp 0110, 0116)];
determining one or more analytics for one or more of the plurality of physical expo elements based on the physical interaction data [use data to drill down into activity (pp 0121); generate event activity maps (pp 0123)]; and
causing the one or more analytics for the one or more of the plurality of physical expo elements to be displayed on a client device with the one or more analytics for the one or more of the plurality of virtual expo elements [create optimal path for attendee based on traffic and trends (pp 0125)].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual expo of Oetting and Yeung with the virtual experience of a physical expo of Graff. A person of ordinary skill in the art would have been motivated to do this to improve user experience.
As per claim 3, Oetting and Yeung in view of Graff teach the method of claim 2, wherein the plurality of physical expo elements comprises one or more of a physical expo floor location, a physical expo booth, or a participant [Graff: event seminar and key speakers (pp 0107); event activity maps include booth and event activity locations (pp 0123)].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual expo of Oetting and Yeung with the virtual experience of a physical expo of Graff. A person of ordinary skill in the art would have been motivated to do this to improve user experience.
As per claim 4, Oetting and Yeung in view of Graff teach the method of claim 3, wherein the physical interaction data comprises one or more of a barcode scan or a location data from a Bluetooth beacon [Graff: scan QR code or barcode (pp 0115)].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual expo of Oetting and Yeung with the virtual experience of a physical expo of Graff. A person of ordinary skill in the art would have been motivated to do this to improve user experience.
Claims 12-14 and 17-19 are rejected, mutatis mutandis, under the same rationale as claims 2-3 as they do not further limit or define over the claims.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YVES DALENCOURT whose telephone number is (571)272-3998. The examiner can normally be reached M-F 8AM-5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ario Etienne can be reached on 571-272-4001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YVES DALENCOURT/ Primary Examiner, Art Unit 2457