Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 02/02/2026 have been fully considered but they are not persuasive. Regarding claims 1 and 16, Applicant argues (pg. 8 of the Remarks) that Resnick does not teach creating a packaged content file comprising one or more portions of the video file and supplemental data, and that Resnick simply makes the video and haptic data available to the user and thus does not teach packaging content. Examiner respectfully disagrees. Resnick teaches (¶0025) The streaming system 130 may make the haptic data available to the user as part of the downloaded media content (e.g., the haptic data may be downloaded along with the media content); (¶0072) the streaming system 130 may integrate the haptic data 222 in a transport stream used for the audio and video. For further evidence, Resnick teaches the following regarding the manifest file: (¶0085) the computing system may transmit the set of haptic data, the one or more audio files, the one or more video files, and the metadata in a transport stream. In one embodiment, the computing system may generate a manifest file that includes the set of haptic data, the one or more audio files, the one or more video files, and the metadata, and the computing system may transmit the manifest file. Examiner notes that Applicant could further define “packaged content” (e.g., a container format such as MP4, AVI, or WebM) and/or clarify what the “portions of the video file” consist of (e.g., the video content/video data itself; metadata associated with the video (e.g., name, size, timestamps, format headers); or a path/address that identifies where the file is stored) to overcome the references cited.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4, 7-10, 13, and 15-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Resnick et al. (US 20230044961, hereinafter Resnick) in view of Robertson et al. (US 20210385264, hereinafter Robertson).
Regarding claim 1, “A method, comprising: receiving, by a device, a portion of a video file” Resnick teaches (Fig. 1) a computing device with a haptic component; (¶0024) haptic component 160 receives the media content.
As to “generating supplemental data associated with the portion of the video file” Resnick teaches (¶0023) the haptic component 160 is configured to use machine learning (ML) techniques to generate haptic data based on audio content, video content, or combinations of audio and video content.
As to “and creating a packaged content…, wherein the packaged content … comprises one or more portions of the video file and supplemental data associated with the one or more portions of the video file.” Resnick teaches (¶0025) Once haptic data for the media content is generated, the haptic component 160 can send the haptic data to the streaming system 130. The streaming system 130 may make the haptic data available to the user as part of the downloaded media content (e.g., the haptic data may be downloaded along with the media content); (¶0072) the streaming system 130 may integrate the haptic data 222 in a transport stream used for the audio and video; (¶0085) the computing system may transmit the set of haptic data, the one or more audio files, the one or more video files, and the metadata in a transport stream. In one embodiment, the computing system may generate a manifest file that includes the set of haptic data, the one or more audio files, the one or more video files, and the metadata, and the computing system may transmit the manifest file.
Resnick does not teach “at a time earlier than encryption” and a packaged content “file.” However, Robertson teaches (¶0035, ¶0057) Content packager 104 receives primary content, e.g., from one or more content sources 10. The primary content may include content encoded in a plurality of bitrates and formats. Content packager 104 generates a primary manifest data structure that includes data describing content available at system 100 for access by client devices 200. For example, the manifest data structure may describe available content segments, each segment pertaining to a portion of content separately available for access and subsequent playback at a client device 200. Content packager 104 also generates the segments files referenced in the primary manifest data structure in a format suitable for client device 200, e.g., HLS, HSS, MPEG-DASH, or the like. Optionally, content packager 104 may encrypt the segment files, e.g., using a session key provided by content controller 102. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick with the encryption as taught by Robertson for the benefit of addressing/preventing illicit interception/consumption of content.
Regarding claim 2, “The method of claim 1, further comprising sending the supplemental data associated with the video file to a second device.” Resnick teaches (¶0025) Once haptic data for the media content is generated, the haptic component 160 can send the haptic data to the streaming system 130. The streaming system 130 may make the haptic data available to the user as part of the downloaded media content (e.g., the haptic data may be downloaded along with the media content);
Regarding claim 3, “The method of claim 1, … further comprises a manifest; wherein the manifest references both the video file and the supplemental data.” Resnick teaches (¶0072-¶0073) delivers segmented haptic data manifests as a sidecar to the client system 110. For example, timed haptic segmented manifests can be delivered as sidecar to video manifests. The segmented manifests can then be used for video playback.
Resnick does not teach “wherein the packaged content file” further comprises a manifest. However, Robertson teaches (¶0035) packager includes manifest data; (¶0047) the alternative content may be included as bonus content (e.g., a child asset) that supplements the primary content (e.g., a feature asset); (¶0043, ¶0044) Alternative content manifest generator 108 generates an alternative content manifest data structure that includes data describing alternative content segments available at system 100 for access by client devices 200. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick with the manifest packager as taught by Robertson for the benefit of bundling related assets in a structure that is easily parsed by client devices.
Regarding claim 4, “The method of claim 1, wherein the supplemental data comprises a time range and at least one of: a color palette, a mood, one or more theme characterizations, and an advertiser palette.” Resnick teaches (¶0075) haptic event tags (or keywords) (associated with haptic data 422) can be inserted into the timestamped series of metadata within the speech marks file 410. In such embodiments, the resulting speech marks file 410 includes a timestamped series of metadata describing at least one of speech, animations, emotional states, or haptic event tags (or keywords)
Regarding claim 7, “The method of claim 1, wherein generating the supplemental data comprises: analyzing the video file; determining a time position of the video file; and determining supplemental data associated with both the video file and the time position of the video file comprising one or more of: keyword(s), mood, and color palate.” Resnick teaches (¶0075) haptic data 422 that is generated using one or more techniques described herein to the client system 110. The haptic data 422 is an example representation of haptic data 220 or haptic data 222 described with respect to FIG. 2. Here, one or more haptic event tags (or keywords) (associated with haptic data 422) can be inserted into the timestamped series of metadata within the speech marks file 410. In such embodiments, the resulting speech marks file 410 includes a timestamped series of metadata describing at least one of speech, animations, emotional states, or haptic event tags (or keywords).
Regarding claim 8, “A method, comprising: receiving, by a first device, a request for content” Resnick teaches (¶0020 and ¶0063) a request for media content.
As to “analyzing the content, …, to generate supplemental data associated with the content” Resnick teaches (¶0023) the haptic component 160 is configured to use machine learning (ML) techniques to generate haptic data based on audio content, video content, or combinations of audio and video content.
As to “and sending, to a second device, the content and the supplemental data associated with the content.” Resnick teaches (¶0025) Once haptic data for the media content is generated, the haptic component 160 can send the haptic data to the streaming system 130. The streaming system 130 may make the haptic data available to the user as part of the downloaded media content (e.g., the haptic data may be downloaded along with the media content).
Resnick does not teach “at an earlier time than encryption of the content” However, Robertson teaches (¶0035, ¶0057) Optionally, content packager 104 may encrypt the segment files, e.g., using a session key provided by content controller 102. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick with the encryption as taught by Robertson for the benefit of addressing/preventing illicit interception/consumption of content.
Regarding claim 9, “The method of claim 8, wherein the supplemental data comprises a time marker of the content and at least one of: a content label associated with the time marker and the content; a content description associated with the time marker and the content; content keyword(s) associated with the time marker and the content; a color analysis associated with the time marker and the content; and content mood(s) associated with the time marker and the content.” Resnick teaches (¶0075) haptic data 422 that is generated using one or more techniques described herein to the client system 110. The haptic data 422 is an example representation of haptic data 220 or haptic data 222 described with respect to FIG. 2. Here, one or more haptic event tags (or keywords) (associated with haptic data 422) can be inserted into the timestamped series of metadata within the speech marks file 410. In such embodiments, the resulting speech marks file 410 includes a timestamped series of metadata describing at least one of speech, animations, emotional states, or haptic event tags (or keywords).
Regarding claim 10, “The method of claim 8, wherein the content and the supplemental data are sent in a packaged content…; further comprising creating a manifest, wherein the manifest collocates in time the supplemental data and the content.” Resnick teaches (¶0072-¶0073) delivers segmented haptic data manifests as a sidecar to the client system 110. For example, timed haptic segmented manifests can be delivered as sidecar to video manifests. The segmented manifests can then be used for video playback.
Resnick does not teach a packaged content “file” and “wherein the packaged content file further comprises the manifest.” However, Robertson teaches (¶0035) Content packager 104 receives primary content, e.g., from one or more content sources 10. The primary content may include content encoded in a plurality of bitrates and formats. Content packager 104 generates a primary manifest data structure that includes data describing content available at system 100 for access by client devices 200. For example, the manifest data structure may describe available content segments, each segment pertaining to a portion of content separately available for access and subsequent playback at a client device 200. Content packager 104 also generates the segments files referenced in the primary manifest data structure in a format suitable for client device 200, e.g., HLS, HSS, MPEG-DASH, or the like. Optionally, content packager 104 may encrypt the segment files, e.g., using a session key provided by content controller 102; (¶0047) the alternative content may be included as bonus content (e.g., a child asset) that supplements the primary content (e.g., a feature asset); (¶0043, ¶0044) Alternative content manifest generator 108 generates an alternative content manifest data structure that includes data describing alternative content segments available at system 100 for access by client devices 200. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick with the manifest packager as taught by Robertson for the benefit of bundling related assets in a structure that is easily parsed by client devices.
Regarding claim 13, “The method of claim 8, wherein analyzing the content comprises determining a time marker and determining one or more of: one or more keywords, a mood, a place theme, a character theme, and color analysis.” Resnick teaches (¶0075) haptic event tags (or keywords) (associated with haptic data 422) can be inserted into the timestamped series of metadata within the speech marks file 410. In such embodiments, the resulting speech marks file 410 includes a timestamped series of metadata describing at least one of speech, animations, emotional states, or haptic event tags (or keywords)
Regarding claim 15, Resnick does not teach “The method of claim 8, wherein the sending is via one of hypertext transfer protocol (HTTP) live streaming (HLS) or dynamic adaptive streaming over HTTP (DASH).” However, Robertson further teaches (¶0035) Content packager 104 also generates the segments files referenced in the primary manifest data structure in a format suitable for client device 200, e.g., HLS, HSS, MPEG-DASH, or the like. Optionally, content packager 104 may encrypt the segment files, e.g., using a session key provided by content controller 102. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick to utilize HLS/MPEG-DASH as taught by Robertson for the benefit of allowing clients to select their bitrate/quality, allowing for smoother playback with less buffering and better quality when bandwidth allows.
Regarding claim 16, “A method, comprising: receiving, by a first device, a video file and supplemental data associated with the video file; generating a package …comprising the video file and the supplemental data; and sending the package … to a second device.” Resnick teaches (¶0025) Once haptic data for the media content is generated, the haptic component 160 can send the haptic data to the streaming system 130. The streaming system 130 may make the haptic data available to the user as part of the downloaded media content (e.g., the haptic data may be downloaded along with the media content); (¶0072) the streaming system 130 may integrate the haptic data 222 in a transport stream used for the audio and video.
Resnick does not teach a package “file” and “wherein encryption of the video file is subsequent to the generation of the supplemental data.” However, Robertson teaches (¶0035, ¶0057) Content packager 104 receives primary content, e.g., from one or more content sources 10. The primary content may include content encoded in a plurality of bitrates and formats. Content packager 104 generates a primary manifest data structure that includes data describing content available at system 100 for access by client devices 200. For example, the manifest data structure may describe available content segments, each segment pertaining to a portion of content separately available for access and subsequent playback at a client device 200. Content packager 104 also generates the segments files referenced in the primary manifest data structure in a format suitable for client device 200, e.g., HLS, HSS, MPEG-DASH, or the like. Optionally, content packager 104 may encrypt the segment files, e.g., using a session key provided by content controller 102. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick with the encryption as taught by Robertson for the benefit of addressing/preventing illicit interception/consumption of content.
Regarding claim 17, “The method of claim 16, further comprising creating a manifest that references the video file and the supplemental data.” Resnick teaches (¶0072-¶0073) delivers segmented haptic data manifests as a sidecar to the client system 110. For example, timed haptic segmented manifests can be delivered as sidecar to video manifests. The segmented manifests can then be used for video playback.
Resnick does not teach “and wherein the packaged file further comprises the manifest.” However, Robertson teaches (¶0035) packager includes manifest data; (¶0047) the alternative content may be included as bonus content (e.g., a child asset) that supplements the primary content (e.g., a feature asset); (¶0043, ¶0044) Alternative content manifest generator 108 generates an alternative content manifest data structure that includes data describing alternative content segments available at system 100 for access by client devices 200. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick with the manifest packager as taught by Robertson for the benefit of bundling related assets in a structure that is easily parsed by client devices.
Regarding claim 18, Resnick does not teach “The method of claim 16, wherein the sending is via one of hypertext transfer protocol (HTTP) live streaming (HLS) or dynamic adaptive streaming over HTTP (DASH).” However, Robertson further teaches (¶0035) Content packager 104 also generates the segments files referenced in the primary manifest data structure in a format suitable for client device 200, e.g., HLS, HSS, MPEG-DASH, or the like. Optionally, content packager 104 may encrypt the segment files, e.g., using a session key provided by content controller 102. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick to utilize HLS/MPEG-DASH as taught by Robertson for the benefit of allowing clients to select their bitrate/quality, allowing for smoother playback with less buffering and better quality when bandwidth allows.
Regarding claim 19, “The method of claim 16, wherein the first device is a packager and the second device is an origin server, a content delivery network (CDN), or a video player.” Robertson further teaches (¶0035) a packager and client device; (¶0027) client device is a video player.
Claim(s) 5 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Resnick and Robertson in view of Asarikuniyil et al. (US 20220369002, hereinafter Asarikuniyil).
Regarding claim 5, Resnick and Robertson do not teach “The method of claim 1, wherein the video file received at a time earlier than encryption is raw video or encoded video.” However, Asarikuniyil teaches (¶0040) raw data 202c that may be associated with a content item (e.g., images, video, audio, text, etc.) may be provided as an input to an encoding processor 206c. The encoding processor 206c may process the raw data 202c to generate one or more samples 210c. A ML processor 218c may process the samples 210c using the Machine Learning model 214c to generate a classification 222c for the content item. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick to use raw data as input as taught by Asarikuniyil for the benefit of analyzing high-quality data that has not lost detail to pre-processing/compression, which allows the model to discover the most relevant patterns.
Regarding claim 12, Resnick and Robertson do not teach “The method of claim 8, wherein the content is raw video or encoded video.” However, Asarikuniyil teaches (¶0040) raw data 202c that may be associated with a content item (e.g., images, video, audio, text, etc.) may be provided as an input to an encoding processor 206c. The encoding processor 206c may process the raw data 202c to generate one or more samples 210c. A ML processor 218c may process the samples 210c using the Machine Learning model 214c to generate a classification 222c for the content item. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick to use raw data as input as taught by Asarikuniyil for the benefit of analyzing high-quality data that has not lost detail to pre-processing/compression, which allows the model to discover the most relevant patterns.
Claim(s) 6 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Resnick and Robertson in view of Wu et al. (US 12363398, hereinafter Wu).
Regarding claim 6, Resnick and Robertson do not teach “The method of claim 1, wherein the packaged content file further comprises advertising supplemental data.” However, Wu teaches (4:1-20) manifests generated by, e.g., packager 112b are provided to an ad insertion service 114. Ad insertion service may replace references to segments based on parameters associated with a client device requesting the manifest. Thus, advertisement content that is more relevant to a user of the client device may be provided for playback. Marker 106 may signal opportunities to replace segments with advertisement content. While a single marker 106 has been described in reference to FIG. 1, in some implementations, markers are present in pairs, one signaling the start of an opportunity to insert advertisements, and the other marker indicating the end of the opportunity. In other embodiments the first marker may indicate various metadata for inserting advertisement content, e.g., a duration of the ad break. Segments 110d and 110e illustrate replacements of segments 110b and 110c with advertisement content. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick and Robertson to include advertisement content as taught by Wu for the benefit of monetizing the content.
Regarding claim 11, Resnick and Robertson do not teach “The method of claim 8, further comprising: receiving advertising insertion data, wherein the advertising insertion data comprises advertising video and advertising supplemental data; inserting the advertising insertion data into a packaged content file; and sending the packaged content file to the second device.” However, Wu teaches (4:1-20) manifests generated by, e.g., packager 112b are provided to an ad insertion service 114. Ad insertion service may replace references to segments based on parameters associated with a client device requesting the manifest. Thus, advertisement content that is more relevant to a user of the client device may be provided for playback. Marker 106 may signal opportunities to replace segments with advertisement content. While a single marker 106 has been described in reference to FIG. 1, in some implementations, markers are present in pairs, one signaling the start of an opportunity to insert advertisements, and the other marker indicating the end of the opportunity. In other embodiments the first marker may indicate various metadata for inserting advertisement content, e.g., a duration of the ad break. Segments 110d and 110e illustrate replacements of segments 110b and 110c with advertisement content. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Resnick and Robertson to include advertisement content as taught by Wu for the benefit of monetizing the content.
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Resnick and Robertson in view of Goldberg et al. (US 20240031047, hereinafter Goldberg) and Ravula (US 20110197223).
Regarding claim 14, Resnick and Robertson do not teach “The method of claim 8, further comprising: receiving emergency alert system (EAS) data; generating EAS supplemental data based on the EAS data.” However, Goldberg teaches (¶0015) The electronic device may receive a signal (for example, an ATSC 1.0 signal or an ATSC 3.0 signal) from an Emergency Alert System (EAS) that may include one or more of a broadcast system or an Internet-based system. The electronic device may extract emergency information (for example, a text, an audio signal, a video signal, or information that may correspond to an emergency alert) from the received signal. The electronic device may determine an external device (for example, a lighting fixture, a handheld mobile device, a wearable device, or a handsfree device) that may be communicatively coupled to the electronic device. The electronic device may control the external device to generate a type of sensory feedback (for example, a visual feedback or a somatosensory feedback) that may correspond to an emergency alert. The type of sensory feedback may be generated based on at least a portion of the emergency information; (¶0017) haptic sensory feedback; (¶0033-¶0034) sensory feedback includes a change in a color of ambient lighting. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Resnick and Robertson with the EAS supplemental data generation as taught by Goldberg for the benefit of more effectively warning a viewer/user of an emergency alert.
Resnick, Robertson, and Goldberg do not teach “and inserting the EAS data and the EAS supplemental data into the packaged content file.” However, Ravula teaches (¶0033) The EAS Event Notification and related data are provided to a recorder/slicer ("R/S") which is a device capable of receiving broadcast content in real time, and providing an "asset package" or "package." It can also be referred to as a real-time asset package generator. A "package" is a video asset with associated meta-data that is structured to be compliant with an industry standard; (¶0034) the digital video asset generated based on the emergency information is called hereafter an EAS asset. The EAS asset combined with the meta-data is called an EAS package. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Resnick, Robertson, and Goldberg with the EAS packaging as taught by Ravula for the benefit of efficient distribution and interpretation of the data.
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Resnick and Robertson in view of Goldberg.
Regarding claim 20, Resnick and Robertson do not teach “The method of claim 16, further comprising: receiving emergency alert system (EAS) data; generating effects based on the EAS data; and causing one or more of the one or more output devices to change states based on the generated effects based on the EAS data.” However, Goldberg teaches (¶0015) The electronic device may receive a signal (for example, an ATSC 1.0 signal or an ATSC 3.0 signal) from an Emergency Alert System (EAS) that may include one or more of a broadcast system or an Internet-based system. The electronic device may extract emergency information (for example, a text, an audio signal, a video signal, or information that may correspond to an emergency alert) from the received signal. The electronic device may determine an external device (for example, a lighting fixture, a handheld mobile device, a wearable device, or a handsfree device) that may be communicatively coupled to the electronic device. The electronic device may control the external device to generate a type of sensory feedback (for example, a visual feedback or a somatosensory feedback) that may correspond to an emergency alert. The type of sensory feedback may be generated based on at least a portion of the emergency information; (¶0017) haptic sensory feedback; (¶0033-¶0034) sensory feedback includes a change in a color of ambient lighting. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Resnick and Robertson with the EAS supplemental data generation as taught by Goldberg for the benefit of more effectively warning a viewer/user of an emergency alert.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Tsukagoshi (US 20180332322) – (¶0016) a video encoding unit configured to generate a video stream having video data with first resolution; a subtitle encoding unit configured to generate a subtitle stream having subtitle bitmap data with second resolution lower than the first resolution; a transmission unit configured to transmit a container including the video stream and the subtitle stream, in a predetermined format
Tsukagoshi (US 20180255270) – (¶0089) the TS formatter 116 transport-packetizes and multiplexes the video stream generated by the video encoder 112, the audio stream generated by the audio encoder 113, and the subtitle stream generated by the subtitle encoder 115, thereby obtaining a transport stream TS as a multiplexed stream. Here, the transport stream TS includes a TS packet that is a container packet obtained by packetizing each of a video stream, an audio stream and a subtitle stream.
Knight et al. (US 20120014674) – (¶0072) the process 20 can also involve packing (30) the audio/video/subtitle data into one or more container files
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK J JOHNSON whose telephone number is (571)272-9629. The examiner can normally be reached 9:00AM-3:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian T. Pendleton can be reached on 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Frank Johnson/Primary Examiner, Art Unit 2425