Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 09/17/2025 have been fully considered but they are not persuasive.
On page 6, the applicant argues the objections, noting that the claims have been amended. In view of the amendments, the objection is withdrawn.
On page 6, the applicant traverses the rejection of the claims under 35 “U.S.C. §112(b) as allegedly being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor regards as the invention. In particular, the Examiner alleges it is not clear if the "voice" recited in the claims is different or the same as the "spoken input." Applicant disagrees and points out that it is clear that the "spoken" refers to being spoken by a user and the digital avatar reacts with voice. It follows directly from the claims that the 'voice' is NOT the same as the 'spoken input' and that 'voice' is used to refer to a property of the digital avatar, whereas 'spoken input' is used to refer to a user-provided input.” The examiner accepts the applicant’s interpretation of voice/spoken. The examiner notes, however, that the applicant has not clarified all of the claim language; the remaining issue is discussed further in the §112 rejection below.
The applicant’s argument on page 7 regarding the previously pending rejection under 35 U.S.C. §101 is persuasive.
On page 7, the applicant argues, “Since the limitations of claim 25 are incorporated into claim 16, the rejection of claim 25 is addressed here with respect to claim 16. The Examiner alleges that Wildhaber's Fig. 9A discloses a humanoid digital avatar. Unlike amended claim 16, Wildhaber fails to disclose, teach, or suggest "the digital avatar is a humanoid digital avatar." The figure in the upper right portion of Wildhaber's Fig. 9A (reproduced below) alleged by the Examiner to be a humanoid avatar, is not an avatar as recited in amended claim 16, at least because it is merely indicia of the self status of a player and it is not capable of interacting with a user of the device with voice combined with at least one of gesture and facial expression.”
The examiner notes that the humanoid face indicating the self-status of a player does not exclude the face itself from being an avatar. The face is a digital or graphic representation of the player and thus is an avatar. Because the face is that of a humanoid, the avatar is considered to be a humanoid avatar. The avatar appears in the self-status window of Fig. 9A.
Fig. 9A: avatar(s) in Wildhaber.
On page 8, the applicant argues, “On page 6 of the Office Action, the Examiner alleges that Wildhaber’s Fig. 9B (reproduced above) discloses ‘a map, from which the players can navigate, shows digital avatars of players’. However, Wildhaber’s Fig. 9B fails to disclose, teach, or suggest any humanoid digital avatars. Instead, Wildhaber’s Fig. 9B is a map that only shows locations of other players, indicated by a simple dot. There is no incentive whatsoever in Wildhaber to display humanoid avatars in the map view of Fig. 9B, as this map view is purely meant to indicate the geolocation of other players in a game session, played by multiple players with mobile devices, see Wildhaber, par. [0134].” The examiner respectfully disagrees.
The examiner notes that the simple dots the applicant refers to can also be considered avatars. It would have been obvious to substitute an avatar of one type for an avatar of another type.
Fig. 9B: avatar(s) in Wildhaber.
On pages 8-9, the applicant argues, “Yang fails to remedy the deficiencies of Wildhaber and it is also not obvious to combine Wildhaber with the teachings of Yang, which merely teaches the use of a humanoid avatar to front an AI-based chatbot. Yang is not concerned with maps, with geographical location, or with other human players of a game session. On the contrary, in Wildhaber, the basic dot-shaped, non-humanoid avatars appearing in the map view of Fig. 9B have the sole purpose of displaying the geographical position of real, human players. There is simply no reason at all why the skilled person would combine Wildhaber with Yang. Furthermore, even if the skilled person would combine Wildhaber and Yang, there is no incentive to take the AI-based avatar features of Yang and implement this in the map view of Fig. 9B of Wildhaber. This would overly complicate the map view of Wildhaber, thereby interfering with game play, rendering the game less playable. Replacing the simple dots of Fig. 9B by humanoid digital avatars would needlessly clutter the map view of Fig. 9B. This would furthermore be non-practical, as the map view of Fig. 9B is displayed on a mobile device, which per definition has a limited screen real-estate. The humanoid digital avatar of Yang, as shown in the top part of Fig. 1, covers most of the screen of a mobile device. In order to implement this humanoid digital avatar of Yang into Fig. 9B of Wildhaber, the avatar would either have to be scaled down dramatically, to a uselessly small dimension, so as not to clutter and completely obscure the map view of Fig. 9B, or the avatar of Yang would have to be used at its regular size, which would then totally obscure the map view of Fig. 9B so as to become completely unusable. Therefore, the skilled person would not implement the humanoid digital avatar of Yang in the map view of Fig. 9B of Wildhaber.” The examiner respectfully disagrees.
The applicant appears to argue the AI-based avatar features of Yang, but there are no limitations in the claims addressing AI-based avatar features. Additionally, the applicant generally characterizes the references with broad statements rather than addressing the actual rejection itself.
On pages 9-11, the applicant argues, “Independent claims 27 and 30 include amendments similar to those made for claim 16. Therefore, claims 27 and 30 are allowable over the proposed combination of Wildhaber and Yang at least for the reasons presented above for claim 16. Accordingly, Applicant respectfully requests reconsideration and withdrawal of the Wildhaber and Yang based section 103 rejection of claims 27 and 30.
On page 12 of the Office Action, the Examiner rejects claim 18 under 35 U.S.C. §103 as allegedly being unpatentable over Wildhaber in view of Yang et al., in further view of U.S. Patent Application No. US20230343053 to Scapel et al. Applicant disagrees and traverses the rejection below.
Claim 18 depends from claim 16, which is allowable over Wildhaber and Yang, at least for the reasons presented above for claim 16. Scapel fails to remedy the deficiencies of Wildhaber and Yang. For example, Scapel merely discloses "techniques for creating and editing avatars." Scapel, para. [0002]. Therefore, claim 18 is allowable over the proposed combination of Wildhaber, Yang, and Scapel. Accordingly, Applicant respectfully requests reconsideration and withdrawal of the Wildhaber, Yang, and Scapel based section 103 rejection of claim 18.
On page 13 of the Office Action, the Examiner rejects claim 19 under 35 U.S.C. §103 as allegedly being unpatentable over Wildhaber in view of Yang et al., in further view of U.S. Patent Application Publication No. 2012/0122574 to Fitzpatrick. Applicant disagrees and traverses the rejection below.
Claim 19 depends from claim 16, which is allowable over Wildhaber and Yang, at least for the reasons presented above for claim 16. Fitzpatrick fails to remedy the deficiencies of Wildhaber and Yang. For example, Fitzpatrick merely discloses "motion capture data and displaying information based upon motion analysis." Fitzpatrick, para. [0003]. Therefore, claim 19 is allowable over the proposed combination of Wildhaber, Yang, and Fitzpatrick. Accordingly, Applicant respectfully requests reconsideration and withdrawal of the Wildhaber, Yang, and Fitzpatrick based section 103 rejection of claim 19.
On page 14 of the Office Action, the Examiner rejects claim 20 under 35 U.S.C. §103 as allegedly being unpatentable over Wildhaber in view of Yang et al., in further view of U.S. Patent Application Publication No. 2017/0305349 to Naboulsi. Applicant disagrees and traverses the rejection below.
Claim 20 depends from claim 16 which is allowable over Wildhaber and Yang at least for the reasons presented above for claim 16. Naboulsi fails to remedy the deficiencies of Wildhaber and Yang. For example, Naboulsi merely discloses "a method for adjusting an outside mirror and inside mirror." Naboulsi, Abstract. Therefore, claim 20 is allowable over the proposed combination of Wildhaber, Yang, and Naboulsi. Accordingly, Applicant respectfully requests reconsideration and withdrawal of the Wildhaber, Yang, and Naboulsi based section 103 rejection of claim 20.
On pages 14-16 of the Office Action, the Examiner rejects claims 21-24 under 35 U.S.C.§103 as allegedly being unpatentable over Wildhaber in view of Yang et al., in further view of U.S. Patent Application Publication No. 2022/0028173 to Shiffman. Applicant disagrees and traverses the rejection below.
Claims 21-24 depend from claim 16, which is allowable over Wildhaber and Yang, at least for the reasons presented above for claim 16. Shiffman fails to remedy the deficiencies of Wildhaber and Yang. For example, Shiffman merely discloses a "software-based solution for rendering digital crowds in real time." Shiffman, Abstract. Therefore, claims 21-24 are allowable over the proposed combination of Wildhaber, Yang, and Shiffman. Accordingly, Applicant respectfully requests reconsideration and withdrawal of the Wildhaber, Yang, and Shiffman based section 103 rejection of claims 21-24.” The examiner respectfully disagrees.
The applicant argues that the arguments presented above apply to the other independent claims. As the argument for the narrowest independent claim is not persuasive, it is not persuasive for the broader claims either. Additionally, the applicant generally characterizes the references with broad statements rather than addressing the actual rejection itself.
For the above reasons, the applicant’s arguments are not persuasive.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 16-24, 26, 27 and 30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per claim 27, […] “wherein the digital avatar displayed on the screen is capable of interacting with a user of the computer device with voice combined with at least one of gesture and facial expression;
receiving a spoken input from the user via at least one microphone;
responding or reacting, by the digital avatar, to the input from the user, with voice combined with gestures and/or facial expressions;
detecting a predetermined trigger event; and
in response to detecting the predetermined trigger event, switching to displaying on the screen a mixed or augmented reality view, wherein the mixed or augmented reality view includes an image captured by a camera associated with the computer device, and wherein a digital avatar is overlaid on the image captured by the camera, which the digital avatar [[is ]]being capable of interacting with a user of the mobile device with voice combined with at least one of gesture and facial expression […]”
The language “responding or reacting, by the digital avatar, to the input from the user, with voice combined with gestures and/or facial expressions;” is unclear. Does “the input” refer to the spoken input, a gesture, a facial expression, or another input not recited? The examiner will interpret “the input” as “the spoken input” for purposes of applying prior art.
The other independent claims, claims 16 and 30, are indefinite under a rationale similar to that applied to claim 27. The dependent claims are indefinite because they inherit the indefiniteness of the independent claims from which they depend.
The examiner notes that “the input” was part of the lack of clarity addressed in the previous rejection; it is addressed in more detail here because the other claim elements have since been clarified.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 16, 17, 26, 27 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over WILDHABER (US 20210283513 A1) in view of Yang et al. (US 20230230303 A1).
Regarding claim 16, WILDHABER teaches a computer-implemented method (See abstract and ¶66: The method is implemented by a computer), comprising:
displaying on a screen of a device a navigation map with a digital avatar (Fig. 9B shows a map, from which the players can navigate, that shows digital avatars of players. Also see ¶51, ¶134), wherein the digital avatar displayed on the screen is capable of interacting with a user of the device (See Fig. 9B, which shows the avatars of the users who are playing the game. Fig. 9A shows the game players interacting with each other.) [...]
detecting a predetermined trigger event (See ¶153, “Using map data obtained by a map server accessible by the devices (for instance the server C), a map view where the location of each player is indicated, as represented on the map of FIG. 9B. A button or other user interface element on the touch surface may be used to switch between augmented reality view and map view.” The touch input is the predetermined trigger event for switching between the augmented reality view and the map view.); and
in response to detecting the predetermined trigger event, switching to displaying on the screen a mixed or augmented reality view, wherein the mixed or augmented reality view includes an image captured by a camera associated with the device (See ¶153, “Using map data obtained by a map server accessible by the devices (for instance the server C), a map view where the location of each player is indicated, as represented on the map of FIG. 9B. A button or other user interface element on the touch surface may be used to switch between augmented reality view and map view.” See ¶14-17: camera for capturing the scene/view), and wherein the digital avatar is overlaid on the image captured by the camera (See Fig. 9A, the avatar in the self-status window in the upper right. Also see ¶134); wherein the digital avatar is a humanoid digital avatar (WILDHABER Fig. 9A shows in the upper right portion a humanoid avatar).
However, WILDHABER doesn’t explicitly disclose: with voice combined with at least one of gesture and facial expression; receiving a spoken input from the user via at least one microphone of the device;
the digital avatar responding or reacting to the input from the user, with voice combined with gestures and/or facial expressions;
Yang teaches wherein the digital avatar displayed on the screen is capable of interacting with a user of the device with voice combined with at least facial expression (See Fig. 1, which shows an avatar on a mobile device screen; the avatar is adjusted with facial expressions corresponding to the uttered voice. See ¶7, ¶9; MPEP 2173.05(h));
receiving a spoken input from the user via at least one microphone of the device (¶42, “An electronic device according to an embodiment of the disclosure may transmit a user-uttered voice obtained using a microphone or the like and spatial information obtained using a camera or the like to a server, receive avatar voice information and an avatar facial expression sequence from the server, and generate an avatar animation, based on avatar facial expression data and avatar lip sync data and provide the generated avatar animation to the user.” The input is from a microphone.);
the digital avatar responding or reacting to the input from the user, with voice combined with facial expressions (See Fig. 1, which shows an avatar on a mobile device screen; the avatar is adjusted with facial expressions corresponding to the uttered voice. ¶7 and ¶9 disclose that the avatar answers with voice and animations with facial expression. MPEP 2173.05(h)).
Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the instant application, to combine the known methods of WILDHABER and Yang to yield predictable results.
Regarding claim 17, WILDHABER in view of Yang teaches a computer-implemented method according to claim 16, wherein the displayed digital avatar is positioned in real-time on the navigation map at a position corresponding to the current location of said device (WILDHABER ¶83 discloses processing or tracking in real time. See Fig. 9: the avatars are dots/circles with various fills, representative of geolocation data that is interpreted as real-time data. ¶149: GPS/GLONASS provide real-time geolocation).
Regarding claim 26, WILDHABER in view of Yang teaches a computer-implemented method according to claim 16, wherein said device is a mobile device, such as a smartphone or a tablet computer (WILDHABER ¶66, ¶68 disclose a smartphone (mobile device) or tablet computer).
Regarding claim 27, WILDHABER teaches a non-transitory computer readable storage medium having a computer code stored thereon, which when executed on a computer device causes the computer device to perform (WILDHABER claim 25 and ¶144), the steps of:
displaying on a screen of the computer device a navigation map with a digital avatar (Fig. 9B shows a map, from which the players can navigate, that shows digital avatars of players. Also see ¶51, ¶134), wherein the digital avatar displayed on the screen is capable of interacting with a user of the computer device (See Fig. 9B, which shows the avatars of the users who are playing the game. Fig. 9A shows the game players interacting with each other.) […]
detecting a predetermined trigger event (See ¶153, “Using map data obtained by a map server accessible by the devices (for instance the server C), a map view where the location of each player is indicated, as represented on the map of FIG. 9B. A button or other user interface element on the touch surface may be used to switch between augmented reality view and map view.” The touch input is the predetermined trigger event for switching between the augmented reality view and the map view.); and
in response to detecting the predetermined trigger event, switching to displaying on the screen a mixed or augmented reality view, wherein the mixed or augmented reality view includes an image captured by a camera associated with the computer device (See ¶153, “Using map data obtained by a map server accessible by the devices (for instance the server C), a map view where the location of each player is indicated, as represented on the map of FIG. 9B. A button or other user interface element on the touch surface may be used to switch between augmented reality view and map view.” See ¶14-17: camera for capturing the scene/view), and wherein a digital avatar is overlaid on the image captured by the camera (See Fig. 9A, the avatar in the self-status window in the upper right. Also see ¶134), […] wherein the digital avatar is a humanoid digital avatar (WILDHABER Fig. 9A shows in the upper right portion a humanoid avatar). However, WILDHABER doesn’t explicitly disclose:
with voice combined with at least one of gesture and facial expression; receiving a spoken input from the user via at least one microphone;
responding or reacting, by the digital avatar, to the input from the user, with voice combined with gestures and/or facial expressions;
the digital avatar being capable of interacting with a user of the mobile device with voice combined with at least one of gesture and facial expression.
Yang teaches wherein the digital avatar displayed on the screen is capable of interacting with a user of the computer device with voice combined with at least facial expression (See Fig. 1, which shows an avatar on a mobile device screen; the avatar is adjusted with facial expressions corresponding to the uttered voice. See ¶7, ¶9; MPEP 2173.05(h));
receiving a spoken input from the user via at least one microphone (¶42, “An electronic device according to an embodiment of the disclosure may transmit a user-uttered voice obtained using a microphone or the like and spatial information obtained using a camera or the like to a server, receive avatar voice information and an avatar facial expression sequence from the server, and generate an avatar animation, based on avatar facial expression data and avatar lip sync data and provide the generated avatar animation to the user.” The input is from a microphone.);
responding or reacting, by the digital avatar, to the input from the user, with voice combined with gestures and/or facial expressions (See Fig. 1, which shows an avatar on a mobile device screen; the avatar is adjusted with facial expressions corresponding to the uttered voice. ¶7 and ¶9 disclose that the avatar answers with voice and animations with facial expression. MPEP 2173.05(h));
the digital avatar being capable of interacting with a user of the mobile device with voice combined with at least facial expression (See Fig. 1, which shows an avatar on a mobile device screen; the avatar is adjusted with facial expressions corresponding to the uttered voice. ¶7 and ¶9 disclose that the avatar answers with voice and animations with facial expression. MPEP 2173.05(h)).
Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the instant application, to combine the known methods of WILDHABER and Yang to yield predictable results.
Claim 30 recites limitations similar to those of claims 16 and 27 and thus is rejected under a similar rationale, as detailed above.
Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over WILDHABER (US 20210283513 A1) in view of Yang et al. (US 20230230303 A1) in further view of Scapel et al. (US 20230343053 A1).
Regarding claim 18, WILDHABER in view of Yang teaches a computer-implemented method according to claim 16, wherein the predetermined trigger event is the user [...] up on the screen from the displayed digital avatar (See ¶134: the user touches the screen to trigger the event), but doesn’t explicitly disclose a swipe-up.
Scapel teaches swipe-up (¶299, “In accordance with some embodiments, the avatar navigation user interface includes a first affordance (e.g., 682) (e.g., a selectable, displayed avatar or an “edit” affordance (that is not an avatar)). While the avatar navigation user interface is displayed, the electronic device detects, via the one or more input devices, a gesture directed to the first affordance (e.g., a touch gesture on a touch screen display at a location that corresponds to the “edit” affordance or the displayed avatar or a swipe gesture in a third direction that is different from the first direction such as a swipe up gesture). In response to detecting the gesture directed to the first affordance, the electronic device displays an avatar library user interface (e.g., 686). The avatar library user interface includes a second affordance (e.g., 648) (e.g., “new avatar” or “plus” affordance) and one or more avatars of the first type.” The examiner notes that Scapel teaches swipe up.).
Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the instant application, to modify WILDHABER in view of Yang with Scapel, substituting one known element for another to obtain predictable results.
Claim(s) 19 is rejected under 35 U.S.C. 103 as being unpatentable over WILDHABER (US 20210283513 A1) in view of Yang et al. (US 20230230303 A1) in further view of Fitzpatrick et al. (US 20120122574 A1).
Regarding claim 19, WILDHABER in view of Yang teaches a computer-implemented method according to claim 16, wherein the predetermined trigger event is the user […] of the device, but doesn’t explicitly disclose performing an upward shake.
FITZPATRICK teaches performing an upward shake (¶162: shake up. Fig. 43A illustrates the option to shake up-down).
Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the instant application, to modify WILDHABER in view of Yang with FITZPATRICK, substituting one known element for another to obtain predictable results.
Claim(s) 20 is rejected under 35 U.S.C. 103 as being unpatentable over WILDHABER (US 20210283513 A1) in view of Yang et al. (US 20230230303 A1) in further view of NABOULSI (US 20170305349 A1).
Regarding claim 20, WILDHABER in view of Yang teaches a computer-implemented method according to claim 16, but doesn’t explicitly disclose wherein the mixed or augmented reality view is a 3D augmented street view map.
NABOULSI teaches wherein the mixed or augmented reality view is a 3D augmented street view map (¶307: an AR display that shows a 3D street view; also see Fig. 23, element 171).
Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the instant application, to modify WILDHABER in view of Yang with NABOULSI, substituting one known element for another to obtain predictable results.
Claim(s) 21-24 are rejected under 35 U.S.C. 103 as being unpatentable over WILDHABER (US 20210283513 A1) in view of Yang et al. (US 20230230303 A1) in further view of Shiffman (US 20220028173 A1).
Regarding claim 21, WILDHABER in view of Yang teaches a computer-implemented method according to claim 16, but doesn’t explicitly disclose wherein the mixed or augmented reality view is a 3D augmented in-venue map of a venue.
Shiffman teaches wherein the mixed or augmented reality view is a 3D augmented in-venue map of a venue (¶13, ¶19: a 3D map presented in AR at a venue).
Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the instant application, to combine the known methods of WILDHABER in view of Yang in further view of Shiffman to yield predictable results.
Regarding claim 22, WILDHABER in view of Yang teaches a computer-implemented method according to claim 16, but doesn’t explicitly disclose wherein the mixed or augmented reality view is a 3D augmented in-venue map of a venue, and wherein the predetermined trigger event is the device connecting to an in-venue wireless communication system of said venue.
Shiffman teaches wherein the mixed or augmented reality view is a 3D augmented in-venue map of a venue, and wherein the predetermined trigger event is the device connecting to an in-venue wireless communication system of said venue (¶13, ¶19: a 3D map presented in AR at a venue. ¶79-80, ¶86 describe an in-venue wireless system).
Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the instant application, to combine the known methods of WILDHABER in view of Yang in further view of Shiffman to yield predictable results, as the known predetermined trigger event of WILDHABER could be implemented in any device combined with the aforementioned prior art.
Regarding claim 23, WILDHABER in view of Yang in further view of Shiffman teaches a computer-implemented method according to claim 21, wherein augmented reality related data for the 3D augmented in-venue map is received by the device exclusively from a local server of said venue via an in-venue wireless communication system of said venue (¶13, ¶19: a 3D map presented in AR at a venue. ¶79-80, ¶86 describe an in-venue wireless system. The server described is interpreted as being local).
Regarding claim 24, WILDHABER in view of Yang in further view of Shiffman teaches a computer-implemented method according to claim 22, wherein augmented reality related data for the 3D augmented in-venue map is received by the device exclusively from a local server of said venue via the in-venue wireless communication system of said venue (¶13, ¶19: a 3D map presented in AR at a venue. ¶79-80, ¶86 describe an in-venue wireless system. The server described is interpreted as being local).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT J CRADDOCK whose telephone number is (571)270-7502. The examiner can normally be reached Monday - Friday 10:00 AM - 6 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona E Faulk can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT J CRADDOCK/Primary Examiner, Art Unit 2618