DETAILED ACTION
This action is in response to the Amendment dated 22 December 2025. Claims 1, 2, 7, 14, 17, 20, and 21 have been amended; claim 18 has been cancelled; and claim 22 has been added. Claims 1-17 and 19-22 remain pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, 5, 7, 10-17 and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2021/0110610 A1) in view of Partheesh et al. (US 2012/0172027 A1).
As for independent claim 1, Xu teaches a method comprising:
obtaining, using the one or more sensors, sensor data for the physical environment [(e.g. see Xu paragraphs 0014, 0018) ”XR anchors come in various types, including location-based anchors and marker-based anchors. In the case of location-based anchors, the location and orientation sensors of the XR device may detect, potentially in real time, the locations of the XR anchors … what portion of the real world environment will be visible on the display of the XR device … there are six degrees of freedom in defining the viewport: three axes of position (e.g., x, y, and z coordinates of the XR device in the real world) and three dimensions of viewing direction (e.g., yaw, pitch, and roll of the XR device)”].
determining a first data set for a three-dimensional environment using at least the sensor data [(e.g. see Xu paragraph 0038 and Fig. 1) ”the real world scene 114 may be viewed by a user through the device 112, e.g., on a display of a head mounted display or mobile phone, or through a set of smart glasses. As discussed above, the field of view of the device 112 and the viewing direction together define a viewport of the user. As the viewport changes, the device 112 (or alternatively the AS 104, edge server 108, or server 110) may detect one or more XR anchors 116.sub.1-116n (hereinafter individually referred to as an “XR anchor 116” or collectively referred to as “XR anchors 116”) within the viewport. In one example, some of the XR anchors 116 may be placed in known, fixed locations (such as buildings, statues, street signs, or the like)”].
running a given application [(e.g. see Xu paragraph 0016) ”an XR application executing on a head mounted display”].
generating a second data set from the first data set based on the spatial restrictions of the given application [(e.g. see Xu paragraphs 0016, 0018, 0053 and Fig. 2) ”the XR anchors that are present in the viewport (or within some configurable distance from the viewport's boundary) can be identified, as well as the digital objects that are associated with the XR anchors. This allows XR anchors that may be nearby, but are not actually present in the predicted viewport (or within the configurable distance from the viewport's boundary) to be filtered out, or removed from consideration … the processing system may remove, from the set of XR anchors, a first subset of anchors including any anchors that are not present in (or are not within some configurable distance from the boundary of) the predicted viewport. This step leaves a second subset of anchors remaining in the set, where the second subset of anchors includes anchors that are present in (or are within the configurable distance from the boundary of) the predicted viewport … an XR application executing on a head mounted display”].
providing only the second data set to the given application [(e.g. see Xu paragraphs 0013, 0061) ”When the XR device detects an XR anchor in a current image of a real world environment, the XR device may establish a connection (e.g., a communication channel) to the XR anchor and download the digital object from the XR anchor. Subsequently, the XR device may render the digital object so that the digital object appears in the XR environment, potentially in the same location as the XR anchor … In step 318, the processing system may render the digital object for presentation by the XR device. For instance, if the digital object includes a visual element (e.g., an image, a video, text, or the like), then the digital object may be displayed on a display of the XR device. In one example, the visual element of the digital object may be rendered as an overlay that can be superimposed over the images of the real world environment that are visible on the display of the XR device”].
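Examiner notes, for illustration only, that the anchor filtering Xu describes in paragraph 0053 amounts to partitioning a set of anchors into a removed first subset (outside the predicted viewport and beyond a configurable distance from its boundary) and a retained second subset. The following is a minimal sketch of that operation; it is a hypothetical paraphrase for clarity, not Xu's implementation, and all names (Anchor, viewport_contains, distance_to_boundary) are invented.

    from dataclasses import dataclass

    @dataclass
    class Anchor:
        anchor_id: str
        position: tuple  # (x, y, z) in real-world coordinates

    def filter_anchors(anchors, viewport_contains, distance_to_boundary,
                       configurable_distance=0.0):
        # Keep an anchor if it is inside the predicted viewport, or within
        # the configurable distance of the viewport's boundary (Xu 0053).
        # The returned list is the "second subset" remaining in the set.
        return [a for a in anchors
                if viewport_contains(a.position)
                or distance_to_boundary(a.position) <= configurable_distance]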
Xu does not specifically teach a plurality of applications, wherein at least one application of the plurality of applications has different spatial restrictions than at least another one of the other applications of the plurality of applications. However, in the same field of endeavor, or solving a similar problem, Partheesh teaches:
a plurality of applications [(e.g. see Partheesh paragraph 0007) “Such geofences may be used in conjunction with several mobile-based "geofence applications"”].
wherein at least one application of the plurality of applications has different spatial restrictions than at least another one of the other applications of the plurality of applications [(e.g. see Partheesh paragraphs 0021, 0027, 0029, 0035, 0036 and Fig. 4C) ”FIG. 3B illustrates a user interface allowing the user to apply geofence settings from the user's geofence profile to various "geofence applications." Examples of such geofence applications are explained in further detail below. The user is provided a list of various geofence applications allowing the user to pick one or more applications to be enabled. In addition to enabling a particular geofence application, the user also has the option of specifying a particular geofence setting to be applied to each of the enabled applications … the user may define a particular geofence location (e.g., by drawing a circle or any other shape over the displayed map to define a geofence boundary). Here, the geofence server 114 would translate the defined map (i.e., the map drawn by the user) and translate it to geographic coordinates for use by the geofence service to determine the presence of the user within a given geofence … the user can create a profile that allows the garage door to be automatically opened when a user enters a given geofence. Accordingly, when a user enters a geofence (defined, for example, by a radius about the user's home address), the geofence service detects the entry and transmits a message, for example, to the home network or an internet service that controls the wireless device attached to the garage opener application. The garage door then automatically opens up in advance (e.g., when the user is 0.2 miles away from home) … the temperature is set to 76 F only when the geofence application detects that the user is within a 2-mile radius of the house. In this example, the user would have previously established a 2-mile geofence around his house, and paired the geofence with the temperature-setting geofence application to coordinate such automatic location-based control … several geofence applications may be simultaneously enabled, as shown in FIG. 4C. In this example, two geofences are defined: geofence 408 and geofence 410”].
Therefore, considering the teachings of Xu and Partheesh, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the teachings of Xu a plurality of applications, wherein at least one application of the plurality of applications has different spatial restrictions than at least another one of the other applications of the plurality of applications, as taught by Partheesh, because allowing different applications to automatically communicate with and control different devices present in the environment based on proximity increases the user’s convenience (e.g. see Partheesh paragraphs 0035, 0037 and abstract).
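Examiner notes, for illustration only, that Partheesh's pairing of each enabled geofence application with its own geofence setting (paragraphs 0021, 0035-0036) can be sketched as below. The application names, radii, and planar-coordinate simplification are hypothetical and are not drawn from Partheesh's disclosure.

    import math

    # Each application carries its own spatial restriction (a geofence
    # radius about a center); positions are planar coordinates in miles
    # for simplicity (hypothetical illustration).
    GEOFENCE_PROFILE = {
        "garage_opener": {"center": (0.0, 0.0), "radius_miles": 0.2},
        "thermostat":    {"center": (0.0, 0.0), "radius_miles": 2.0},
    }

    def enabled_applications(user_position):
        # Return the applications whose geofence contains the user.
        return [app for app, fence in GEOFENCE_PROFILE.items()
                if math.dist(user_position, fence["center"])
                <= fence["radius_miles"]]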
As for dependent claim 2, Xu and Partheesh teach the method as described in claim 1 and Xu further teaches:
wherein generating the second data set from the first data set based on the spatial restrictions of the given application comprises including a first subset of the first data set in the second data set and wherein the first subset of the first data set is associated with locations inside a boundary [(e.g. see Xu paragraph 0053) ”the processing system may remove, from the set of XR anchors, a first subset of anchors including any anchors that are not present in (or are not within some configurable distance from the boundary of) the predicted viewport. This step leaves a second subset of anchors remaining in the set, where the second subset of anchors includes anchors that are present in (or are within the configurable distance from the boundary of) the predicted viewport”].
As for dependent claim 4, Xu and Partheesh teach the method as described in claim 2 and Xu further teaches:
wherein the boundary is defined at least partially by line-of-sight distances to physical objects in the physical environment [(e.g. see Xu paragraphs 0017, 0038, 0043 and Fig. 2 numeral 200) ”the real world scene 114 may be viewed by a user through the device 112, e.g., on a display of a head mounted display or mobile phone, or through a set of smart glasses. As discussed above, the field of view of the device 112 and the viewing direction together define a viewport of the user … The FoV defines the extent of the observable area, which may be a fixed parameter of the XR device … an example viewport 200 for an XR device that is viewing the real world scene 114 of FIG. 1. As illustrated, based on the viewing direction and on the FoV of the XR device, the viewport 200 may comprise less than the entirety of the real world scene 114”].
As for dependent claim 5, Xu and Partheesh teach the method as described in claim 2 and Xu further teaches:
wherein the boundary is defined at least partially by user input [(e.g. see Xu paragraph 0018) ”configurable distance from the viewport's boundary”].
As for dependent claim 7, Xu and Partheesh teach the method as described in claim 1 and Xu further teaches:
wherein generating the second data set from the first data set based on the spatial restrictions of the given application comprises removing a subset of the first data set that is associated with locations outside a boundary [(e.g. see Xu paragraphs 0012, 0053) ”removing a first subset of the extended reality anchors from the set, wherein locations of anchors in the first subset of extended reality anchors fall outside of a threshold distance from a boundary of the predicted viewport … the processing system may remove, from the set of XR anchors, a first subset of anchors including any anchors that are not present in (or are not within some configurable distance from the boundary of) the predicted viewport. This step leaves a second subset of anchors remaining in the set, where the second subset of anchors includes anchors that are present in (or are within the configurable distance from the boundary of) the predicted viewport”].
As for dependent claim 10, Xu and Partheesh teach the method as described in claim 1 and Xu further teaches:
wherein the one or more sensors comprises one or more cameras and wherein the sensor data comprises camera data [(e.g. see Xu paragraphs 0045, 0083) ”various input/output devices 506, e.g., a camera, a video camera … device 112 may periodically send actual measured viewport information … camera position information”].
As for dependent claim 11, Xu and Partheesh teach the method as described in claim 1 and Xu further teaches:
wherein the one or more sensors comprises one or more accelerometers and wherein the sensor data comprises accelerometer data [(e.g. see Xu paragraphs 0029) ”The current use context may be inferred from data collected by sensors of the XR device. For instance, an accelerometer”].
As for dependent claim 12, Xu and Partheesh teach the method as described in claim 1 and Xu further teaches:
wherein the first data set comprises a three-dimensional representation of the physical environment [(e.g. see Xu paragraphs 0013, 0049) ”XR anchors to determine which digital objects should be rendered at which locations in the real world environment to produce the XR media. In some examples, the XR anchors may predefine precise locations in the real world environment at which certain types of digital objects may be introduced … these six degrees of freedom include three axes of position (e.g., x, y, and z coordinates of the XR device in the real world) and three dimensions of viewing direction (e.g., yaw, pitch, and roll of the XR device)”].
As for dependent claim 13, Xu and Partheesh teach the method as described in claim 1 and Xu further teaches:
wherein the electronic device further comprises a display that is configured to display a virtual object in the three-dimensional environment and wherein the first data set comprises data regarding the virtual object in the three-dimensional environment [(e.g. see Xu paragraphs 0013, 0061) ”When the XR device detects an XR anchor in a current image of a real world environment, the XR device may establish a connection (e.g., a communication channel) to the XR anchor and download the digital object from the XR anchor. Subsequently, the XR device may render the digital object so that the digital object appears in the XR environment, potentially in the same location as the XR anchor … In step 318, the processing system may render the digital object for presentation by the XR device. For instance, if the digital object includes a visual element (e.g., an image, a video, text, or the like), then the digital object may be displayed on a display of the XR device. In one example, the visual element of the digital object may be rendered as an overlay that can be superimposed over the images of the real world environment that are visible on the display of the XR device”].
As for dependent claim 14, Xu and Partheesh teach the method as described in claim 1 and Xu further teaches:
wherein generating the second data set comprises generating the second data set from the first data set based on spatial and temporal restrictions of the given application [(e.g. see Xu paragraph 0042) ”the viewport of the user may be predicted in advance (e.g., x seconds before the user actually views the viewport). The XR device (e.g., device 112, or a server connected to the XR device) may have prior knowledge of the locations of at least some of the XR anchors 116 in the real world environment and may be able to detect the presence of other XR anchors 116”].
As for dependent claim 15, Xu and Partheesh teach the method as described in claim 14 and Xu further teaches:
wherein generating the second data set comprises including a first subset of the first data set in the second data set and wherein the first subset of the first data set is associated with both locations inside a boundary and times after a cutoff time [(e.g. see Xu paragraphs 0018, 0043, 0060) ”what portion of the real world environment will be visible on the display of the XR device, at a given point in the future (e.g., a few seconds from the current time) … in the example illustrated in FIG. 2, the XR anchors 116.sub.2 and 116.sub.3 are visible within the viewport 200, while the XR anchors 116.sub.1 and 116n fall outside of the viewport. As such, if the XR device predicts that the viewport 200 will be visible to the user in x seconds, and if the XR device knows or can detect the locations of the XR anchors 116, then the XR device can determine that the XR anchors 116.sub.2 and 116.sub.3 are likely to be visible to the user in x seconds … processing system may detect that the current viewport of the user matches the predicted viewport that was predicted”].
As for dependent claim 16, Xu and Partheesh teach the method as described in claim 14 and Xu further teaches:
wherein generating the second data set comprises including a first subset of the first data set in the second data set and wherein the first subset of the first data set is associated with both locations inside a boundary and times before a cutoff time [(e.g. see Xu paragraphs 0018, 0053) ”what portion of the real world environment will be visible on the display of the XR device, at a given point in the future (e.g., a few seconds from the current time) … the XR anchors that are present in the viewport (or within some configurable distance from the viewport's boundary) can be identified, as well as the digital objects that are associated with the XR anchors. This allows XR anchors that may be nearby, but are not actually present in the predicted viewport (or within the configurable distance from the viewport's boundary) to be filtered out, or removed from consideration … a first subset of anchors including any anchors that are not present in (or are not within some configurable distance from the boundary of) the predicted viewport. This step leaves a second subset of anchors remaining in the set, where the second subset of anchors includes anchors that are present in (or are within the configurable distance from the boundary of) the predicted viewport”].
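Examiner notes, for illustration only, that the conjunctive spatial-and-temporal test mapped for claims 15 and 16 can be sketched as a single predicate over location and time: claim 15's variant keeps times after the cutoff, while claim 16's keeps times before it. The record structure and helper names below are hypothetical.

    from collections import namedtuple

    Record = namedtuple("Record", ["location", "timestamp"])

    def spatio_temporal_subset(records, inside_boundary, cutoff_time,
                               keep_after=True):
        # Include a record only if its location is inside the boundary
        # AND its time falls on the selected side of the cutoff.
        def time_ok(t):
            return t > cutoff_time if keep_after else t < cutoff_time
        return [r for r in records
                if inside_boundary(r.location) and time_ok(r.timestamp)]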
As for independent claim 17, Xu and Partheesh teach a method. Claim 17 discloses substantially the same limitations as claims 1 and 2. Therefore, it is rejected with the same rationale as claims 1 and 2. Further, Partheesh teaches based on a second application running, using the first data set to obtain a third data set by only including, in the third data set, data for portions of the three-dimensional environment associated with locations within a second boundary that is different from the first boundary and providing the third data set to the second application [(e.g. see Partheesh paragraphs 0021, 0027, 0029, 0035, 0036 and Fig. 4C) “FIG. 3B illustrates a user interface allowing the user to apply geofence settings from the user's geofence profile to various "geofence applications." Examples of such geofence applications are explained in further detail below. The user is provided a list of various geofence applications allowing the user to pick one or more applications to be enabled. In addition to enabling a particular geofence application, the user also has the option of specifying a particular geofence setting to be applied to each of the enabled applications … the user may define a particular geofence location (e.g., by drawing a circle or any other shape over the displayed map to define a geofence boundary). Here, the geofence server 114 would translate the defined map (i.e., the map drawn by the user) and translate it to geographic coordinates for use by the geofence service to determine the presence of the user within a given geofence … the user can create a profile that allows the garage door to be automatically opened when a user enters a given geofence. Accordingly, when a user enters a geofence (defined, for example, by a radius about the user's home address), the geofence service detects the entry and transmits a message, for example, to the home network or an internet service that controls the wireless device attached to the garage opener application. The garage door then automatically opens up in advance (e.g., when the user is 0.2 miles away from home) … the temperature is set to 76 F only when the geofence application detects that the user is within a 2-mile radius of the house. In this example, the user would have previously established a 2-mile geofence around his house, and paired the geofence with the temperature-setting geofence application to coordinate such automatic location-based control … several geofence applications may be simultaneously enabled, as shown in FIG. 4C. In this example, two geofences are defined: geofence 408 and geofence 410. The two geofences have an overlap defined by region 410. In this example, geofence 408 is associated with a phone-ringer application. Geofence 408 is associated with a garage opener application. Accordingly, the overlap region 410 is associated with both the phone-ringer and garage opener applications. Here, if the user is located at mobile location 1, the geofence service does not enable either of the two applications. If the user is located at geofence overlap 410, both applications are enabled. If the user is located at a location of geofence 408 that is not covered by the overlap region 410, only the phone-ringer application is turned on. If the user is located at a location of geofence 406 not covered by overlap region 410, only the garage opener application is enabled”].
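Examiner notes, for illustration only, that as mapped above, claim 17 derives from one first data set a second data set restricted by a first boundary and a third data set restricted by a different second boundary, each provided to its own application. A hypothetical sketch follows; all names are invented.

    def restrict_to_boundary(first_data_set, boundary_contains):
        # Include only items whose locations fall within the boundary.
        return [item for item in first_data_set
                if boundary_contains(item.location)]

    # second_data_set = restrict_to_boundary(first_data_set, first_boundary.contains)
    # third_data_set  = restrict_to_boundary(first_data_set, second_boundary.contains)
    # Each restricted set is then provided only to its own application.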
As for dependent claim 19, Xu and Partheesh teach the method as described in claim 17; further, claim 19 discloses substantially the same limitations as claim 15. Therefore, it is rejected with the same rationale as claim 15.
As for independent claim 20, Xu and Partheesh teach a device. Claim 20 discloses substantially the same limitations as claim 1. Therefore, it is rejected with the same rationale as claim 1. Further, Xu teaches a head-mounted support structure; one or more sensors coupled to the head-mounted support structure and configured to obtain sensor data for the physical environment [(e.g. see Xu paragraphs 0014, 0034, 0044) “device 112 may comprise a … a wearable computing device (e.g., smart glasses, a virtual reality (VR) headset or other type of head mounted display, or the like) … a wearable device … which may include sensor … the location and orientation sensors of the XR device”].
As for dependent claim 21, Xu and Partheesh teach the device as described in claim 20; further, claim 21 discloses substantially the same limitations as claim 2. Therefore, it is rejected with the same rationale as claim 2.
As for dependent claim 22, Xu and Partheesh teach the method as described in claim 17, but Xu does not specifically teach the following limitation. However, Partheesh teaches:
wherein the second boundary has at least one characteristic different from the first boundary, and wherein the at least one characteristic is selected from the group consisting of: a size and a shape [(e.g. see Partheesh paragraphs 0021, 0023, 0030) “specify that the geofence is a 2 mile radius … defining a 1 mile radius around work … Using the map, the user may define a particular geofence location (e.g., by drawing a circle or any other shape over the displayed map to define a geofence boundary)”]. Examiner notes that this is a Markush group limitation, for which the prior art need only show one of the listed alternatives.
The motivation to combine is the same as that used for claim 1.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2021/0110610 A1) in view of Partheesh et al. (US 2012/0172027 A1), as applied to claim 2 above, and further in view of Lee (US 11,232,644 B1).
As for dependent claim 3, Xu and Partheesh teach the method as described in claim 2, but do not specifically teach wherein the boundary is defined by a fixed radius around the electronic device. However, in the same field of endeavor, Lee teaches:
wherein the boundary is defined by a fixed radius around the electronic device [(e.g. see Lee col 8 lines 31-55 and Figs. 1C and 5A) ”The VR environment 140 may have a virtual boundary 115 corresponding to the real-world environment 100. The VR environment 140 may be a VR game, VR office, or other VR setting that is displayed in the field of view 120 of the user. The virtual boundary 115 may define or drawn mark the edge of a safe area for the user to explore … the virtual boundary 115 may correspond to real-world objects at or just beyond arm's reach of the user (e.g., a 1 meter radius around the user”].
Therefore, considering the teachings of Xu, Partheesh and Lee, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add wherein the boundary is defined by a fixed radius around the electronic device, as taught by Lee, to the teachings of Xu and Partheesh because this may allow the user to quickly assess obstacles that may be in the user's path (e.g. see Lee col 14 lines 16-18).
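Examiner notes, for illustration only, that the fixed-radius boundary taught by Lee (e.g., a 1 meter radius around the user) reduces to a distance test against the device position. A hypothetical sketch (names invented):

    import math

    def inside_fixed_radius(location, device_position, radius_meters=1.0):
        # True if the location lies within the fixed radius of the device.
        return math.dist(location, device_position) <= radius_meters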
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2021/0110610 A1) in view of Partheesh et al. (US 2012/0172027 A1), as applied to claim 2 above, and further in view of Shishido (US 2024/0007477 A1).
As for dependent claim 6, Xu and Partheesh teach the method as described in claim 2, but do not specifically teach wherein the first data set is binned into a plurality of different groups, wherein each group of the plurality of different groups has a respective centroid, and wherein the first subset of the first data set comprises each group that has a centroid inside the boundary. However, in the same field of endeavor, Shishido teaches:
wherein the first data set is binned into a plurality of different groups, wherein each group of the plurality of different groups has a respective centroid, and wherein the first subset of the first data set comprises each group that has a centroid inside the boundary [(e.g. see Shishido paragraphs 0037, 0061 and Figs. 4-5) ”In the example of FIG. 4, the boundary region obtainment unit 62 obtains information on a boundary region ARA for the avatar Aa, information on a boundary region ARB for the avatar Ba, and information on a boundary region ARC for the avatar Ca. In the example of FIG. 4, the boundary region ARA is a circular region around a position of the avatar Aa, the circular region having a radius equal to a distance DA, the boundary region ARB is a circular region around a position of the avatar Ba, the circular region having a radius equal to a distance DB, and the boundary region ARC is a circular region around a position of the avatar Ca, the circular region having a radius equal to a distance DC … processing unit 66 determines whether the avatar Ba is positioned in the boundary region ARA of the avatar Aa (Step S20), and in a case where the avatar Ba is not positioned in the boundary region ARA (Step S20; No), proceeds to Step S18 and generates image data on the avatar Aa for the existing mode of display, that is, for example, without imparting transparency. On the contrary, in a case where the avatar Ba is positioned in the boundary region ARA of the avatar Aa (Step S20; Yes), the image processing unit 66 generates image data on the avatar Aa for a different mode of display, that is, for example, by imparting transparency (Step S22)”].
Therefore, considering the teachings of Xu, Partheesh and Shishido, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add wherein the first data set is binned into a plurality of different groups, wherein each group of the plurality of different groups has a respective centroid, and wherein the first subset of the first data set comprises each group that has a centroid inside the boundary, as taught by Shishido, to the teachings of Xu and Partheesh because it allows privacy within an augmented or virtual environment to be appropriately protected (e.g. see Shishido paragraphs 0042, 0065).
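Examiner notes, for illustration only, that the binning-and-centroid test recited in claim 6 can be sketched as grouping the first data set, computing each group's centroid, and retaining only the groups whose centroid lies inside the boundary. The grouping itself is assumed given; all names are hypothetical.

    def centroid(points):
        # Component-wise mean of a group of points.
        n = len(points)
        return tuple(sum(coords) / n for coords in zip(*points))

    def groups_with_centroid_inside(groups, inside_boundary):
        # groups: iterable of point lists (the bins of the first data set).
        return [g for g in groups if inside_boundary(centroid(g))]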
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2021/0110610 A1) in view of Partheesh et al. (US 2012/0172027 A1), as applied to claim 1 above, and further in view of Forster (US 10,824,923 B1).
As for dependent claim 8, Xu and Partheesh teach the method as described in claim 1, but do not specifically teach wherein the first data set comprises spatial mesh data and identifies one or more objects in the three-dimensional environment. However, in the same field of endeavor, Forster teaches:
wherein the first data set comprises spatial mesh data and identifies one or more objects in the three-dimensional environment [(e.g. see Forster col 4 lines 22-26) ”The 3D data, which may be stored in the form of … meshes in particular embodiments, may represent a 3D model of the environment”].
Therefore, considering the teachings of Xu, Partheesh and Forster, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add wherein the first data set comprises spatial mesh data and identifies one or more objects in the three-dimensional environment, as taught by Forster, to the teachings of Xu and Partheesh because it may determine the expected landmarks with respect to the subsequent location promptly and more efficiently (e.g. see Forster col 14 lines 39-41).
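Examiner notes, for illustration only, that a first data set of the kind recited in claim 8 — spatial mesh data together with identified objects — might be represented as below. The structure is a hypothetical sketch, not Forster's landmark database format.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class SpatialMesh:
        vertices: List[Tuple[float, float, float]]  # 3D points of the mesh
        faces: List[Tuple[int, int, int]] = field(default_factory=list)  # vertex indices
        object_label: str = "unknown"  # identified object in the environment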
As for dependent claim 9, Xu and Partheesh teach the method as described in claim 1, but do not specifically teach the following limitation. However, Forster teaches:
wherein the one or more sensors comprises one or more depth sensors and wherein the sensor data comprises depth sensor data [(e.g. see Forster col 4 lines 16-22) ”the device may perform an initialization process to obtain 3D data of the environment and store the data in the landmark database. For example, it may use … depth sensors to compute the position and orientation of objects in the environment”].
The motivation to combine is the same as that used for claim 8.
Response to Arguments
Applicant's arguments, filed 22 December 2025, have been fully considered but they are not persuasive.
Applicant argues that “Xu does not disclose individual applications with different spatial restrictions as recited in claim 1 as presently amended … a second boundary that is different from the first boundary” (pages 12-14).
The argument described above, with respect to the newly added limitations to the independent claims, has been considered but is moot in view of the new grounds of rejection.
Citation of Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. PGPub 2014/0059695 A1 to Parecki et al., published 27 February 2014. The subject matter disclosed therein is pertinent to that of claims 1-17 and 19-22 (e.g., the ability for different applications to deliver a subset of content based on spatial location).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J FIBBI, whose telephone number is (571) 270-3358. The examiner can normally be reached Monday through Thursday, 8am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER J FIBBI/Primary Examiner, Art Unit 2174