DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-18 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Layton et al. (US 20030031334 A1) in view of Ek et al. (US 20140010391 A1), further in view of Sunder et al. (US 20200107149 A1), and further in view of Mehra (US 10555106 B1).
With respect to claim 1, Layton discloses a data generation method, wherein the method comprises:
obtaining spatial object information for generating azimuth information of a spatial object (fig.1 #4) relative to a data generation apparatus (fig.6 #65; a data generation apparatus such as a mobile phone may be used in conjunction with the method, Par.[0111])(Par.[0050] rendering engine #12 obtains spatial object information or “location information” of audio tracks that are associated with spatial objects, such as the “Object of Interest” #4 of figure 1);
generating content information for describing the spatial object (Par.[0069] audio content may be created by audio clip creation unit #38 for describing spatial objects relative to the user);
generating the azimuth information based on the spatial object information, wherein the azimuth information indicates an azimuth of the spatial position (As shown in figure 1, azimuthal information "θ" describes an azimuthal angle between the user of the data generation apparatus and the spatial object #4); and
generating, based on the azimuth information and the content information, spatial sound data for playing a spatial sound, wherein a position of a sound source of the spatial sound corresponds to the azimuth information (Par.[0050-0051] rendering engine #12 implements both the location information of the sound source and current orientation of the user, as azimuthal information shown in figure 1, to generate spatial sound data for output via headphones #2 to a user).
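The rendering step mapped above to Layton's rendering engine can be reduced, for illustration only, to taking an azimuth and a mono source and producing two-channel output whose apparent source position tracks the azimuth. The following is a minimal sketch using simple constant-power amplitude panning; it is not Layton's disclosed HRTF-based rendering, and all names and parameter choices are the examiner-independent author's own:

```python
import numpy as np

def render_spatial(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Pan a mono signal to stereo with constant-power amplitude panning,
    so the apparent sound-source position follows the azimuth.
    azimuth_deg: 0 = straight ahead, +90 = full right, -90 = full left."""
    # Map [-90, +90] degrees onto a pan angle in [0, pi/2].
    pan = (np.clip(azimuth_deg, -90.0, 90.0) + 90.0) / 180.0 * (np.pi / 2)
    left_gain, right_gain = np.cos(pan), np.sin(pan)
    return np.stack([left_gain * mono, right_gain * mono])

# A 440 Hz tone panned hard right: the left channel is (numerically) silent.
tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
stereo = render_spatial(tone, 90.0)
```

Constant-power panning keeps the summed energy of the two channels independent of the azimuth, which is why the cosine/sine gain pair is used rather than a linear crossfade.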
Layton does not disclose expressly determining whether a spatial position indicated based on the azimuth information is on a preset spatial direction; generating volume decrease indication information in response to determining that the spatial position indicated based on the azimuth information is not on the preset spatial direction, wherein the volume decrease indication information indicates to decrease volume of a spatial sound, and generating the content information based on the volume decrease information.
Ek discloses a data generation method comprising: determining whether a spatial position of a spatial object (fig.1A,B #108; Par.[0038]), indicated based on the azimuth information is on a preset spatial direction (Par.[0040] As shown in figures 1A,1B; when object #108 is determined to lie in the same azimuthal direction (i.e. the preset spatial direction) as the user's head #102 (see fig.1B), it is determined that the object is a gazed object that is on a preset spatial direction); generating volume decrease indication information in response to determining that the spatial position indicated based on the azimuth information is not on the preset spatial direction, wherein the volume decrease indication information indicates to decrease volume of a spatial sound, and generating the content information based on the volume decrease information (Par.[0033][0035][0041] based on a user's gaze direction a selected object may be amplified, which is a volume increase indication; Par.[0101] upon determining that an object is no longer gazed at (i.e. the object is not on the preset spatial direction) a stopping of amplification of the object is performed; wherein the stopping of amplification is considered volume decrease indication information).
It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to amplify or increase the volume of the spatial objects of Layton when a user directs their gaze towards the object, and stop amplifying the objects when the user changes their gaze away from the object, as performed by Ek. The motivation for doing so would have been to increase an awareness of the spatial object when a user directs their gaze towards the object by increasing a sound volume of the object, and discontinue the amplification of the object when the user changes their focus.
Layton does not disclose expressly wherein obtaining the spatial object information comprises locating, by using at least one sound sensor comprised in the data generation apparatus, a spatial position of the spatial object by using a delay estimation positioning method, wherein locating the spatial position comprises locating coordinates of spatial object relative to the data generation apparatus.
Sunder discloses a data generation method comprising: obtaining spatial object information for generating directional information of a spatial object (#118) relative to a data generation apparatus (#102), wherein obtaining the spatial object information comprises locating, by using at least one sound sensor (#104) comprised in the data generation apparatus (Par.[0021-0022]), a spatial position of the spatial object by using a delay estimation positioning method (Par.[0025-0026] a location or spatial position of sound source #118 may be determined based on an interaural time difference (ITD) of sound received at each microphone #104; wherein an ITD is a measure of delay between the microphones); wherein locating the spatial position comprises locating coordinates of spatial object relative to the data generation apparatus (Par.[0056-0069] a Kalman filter model may be used to improve a localization accuracy of the sound source, wherein the model implements a state vector defined by coordinates x1,x2 in a Cartesian coordinate axis system; therefore the localization process of Sunder provides locating coordinates of the spatial object #118) and generating, based on the directional information, spatial sound data for playing the spatial sound, wherein a position of a sound source of the spatial sound corresponds to the directional information (Par.[0027] sound spatializer #118 may spatialize the captured sound #120 such that it appears to come from the location of sound source #118).
It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to obtain the spatial object information of Layton and Ek by using the at least one sound sensor and sound localization process of Sunder. The motivation for doing so would have been to capture real-world sounds and spatially reproduce them to a user with respect to a direction of the real-world sound.
The combination of Layton, Ek, and Sunder discloses generating volume decrease indication information in response to determining that the spatial position indicated based on the azimuth information is not on the preset spatial direction, wherein the volume decrease indication information indicates to decrease volume of a spatial sound, and generating the content information based on the volume decrease information, wherein the volume decrease information is a stopping of amplification of the object (See Ek: Par.[0101] As detailed above).
The combination of Layton, Ek, and Sunder does not disclose expressly wherein the volume decrease information indicates to decrease a volume of a spatial sound by a target value as a gain parameter.
Mehra discloses generating volume decrease indication information (col.6 ln.66-67; col.7 ln.1-2 “different weights”) in response to determining that a spatial position indicated based on directional information of content information is not on a preset spatial direction (col.3 ln.16-43; col.7 ln.57-62 a direction of a gaze of a user is determined as a “preset direction”), wherein the volume decrease indication information indicates to decrease volume of a spatial sound by a target value, and generating, based on the directional information, content information, and the volume decrease information, spatial sound data for playing the spatial sound, wherein a position of a sound source of the spatial sound corresponds to the directional information, and wherein the spatial sound data comprises the target value as a gain parameter (col.6 ln.56-67; col.7 ln.1-27; a lower weight value “gain parameter” is given to audio signals that deviate from the direction of gaze “preset direction”).
It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to use the gain parameter of Mehra as the volume decrease information of Layton, Ek and Sunder. The motivation for doing so would have been to control a volume level of sounds that are not on the preset spatial direction.
With respect to claim 2, Layton discloses the method according to claim 1, wherein generating the azimuth information based on the spatial object information comprises: generating the azimuth information based on the spatial object information and at least one of a position or a posture of the data generation apparatus (Par.[0047-0048] As shown in figure 1, the azimuthal information "θ" is the angle between the orientation or position of the user #1 wearing the data generation apparatus and the spatial object #4).
With respect to claim 3, Layton discloses the method according to claim 1, wherein obtaining the spatial object information comprises: receiving the spatial object information (Par.[0077-0078] spatial information regarding spatial objects may be received, such as by a unique URL or dynamically sensed using a sensing system).
With respect to claim 4, Layton discloses the method according to claim 3, wherein receiving the spatial object information comprises receiving the spatial object information in at least one of the following three manners: receiving audio stream data generated by an application program (Par.[0078] audio data is streamed from a network server); receiving interface data generated by an application program; or receiving map data stored on a network side or a terminal side (Par.[0077-0078] URLs may be mapped to geographic locations).
With respect to claim 5, Layton discloses the method according to claim 1, wherein the spatial object information is obtained using at least one of a photosensitive sensor, a sound sensor, an image sensor, an infrared sensor, a heat-sensitive sensor, a pressure sensor, or an inertial sensor (Par.[0048] “accelerometer”).
With respect to claim 6, Layton discloses the method according to claim 1, wherein generating the content information and the azimuth information based on the spatial object information comprises: generating the content information and the azimuth information based on the spatial object information in response to determining that the spatial object information meets a preset condition (Par.[0050] the location information of the audio tracks may include a preset condition including “how far away” the track should be heard).
With respect to claim 7, Layton discloses the method according to claim 6, wherein the spatial object information comprises a preset spatial position area (Par.[0050] the location information includes a preset spatial location at which the track should occur), a preset spatial direction, or preset object content.
With respect to claim 8, Layton discloses the method according to claim 7, wherein the preset spatial direction is a facial orientation of a user wearing a dual-channel headset that is configured to play the spatial sound (Par.[0010] a listener's head orientation is used in determining a current location of the listener in the environment, where the orientation of the head determines a preset direction for selecting a spatial object, as shown in figure 5, Par.[0077]).
With respect to claim 9, Layton discloses the method according to claim 1, wherein the method further comprises: generating volume increase indication information in response to determining that the spatial object information meets a preset condition, wherein the volume increase indication information indicates to increase volume of the spatial sound corresponding to the spatial object information that meets the preset condition (Par.[0056][0083] a volume of sound sources may be adjusted according to a preset condition of a position or orientation of a user in relation to the spatial object).
With respect to claim 10, Layton discloses the method according to claim 9, wherein the spatial object information comprises a preset spatial position area (Par.[0050] the location information includes a preset spatial location at which the track should occur), the preset spatial direction, or preset object content.
With respect to claim 11, Layton discloses the method according to claim 10, wherein the preset spatial direction is a facial orientation of a user wearing a dual-channel headset (fig.2 "L,R" channel) that is configured to play the spatial sound (Par.[0010] a listener's head orientation is used in determining a current location of the listener in the environment, where the orientation of the head determines a preset direction for selecting a spatial object, as shown in figure 5, Par.[0077]).
With respect to claim 12, Layton discloses a data generation apparatus, comprising: at least one processor; and one or more memories coupled to the at least one processor and storing programming instructions for execution by the at least one processor to perform operations comprising:
obtaining spatial object information for generating azimuth information of a spatial object (fig.1 #4) relative to the data generation apparatus (fig.6 #65; a data generation apparatus such as a mobile phone may be used in conjunction with the method, Par.[0111])(Par.[0050] rendering engine #12 obtains spatial object information or “location information” of audio tracks that are associated with spatial objects, such as the “Object of Interest” #4 of figure 1);
generating content information for describing the spatial object (Par.[0069] audio content may be created by audio clip creation unit #38 for describing spatial objects relative to the user);
generating the azimuth information based on the spatial object information, wherein the azimuth information indicates an azimuth of the spatial position (As shown in figure 1, azimuthal information "θ" describes an azimuthal angle between the user of the data generation apparatus and the spatial object #4); and
generating, based on the azimuth information and the content information, spatial sound data for playing a spatial sound, wherein a position of a sound source of the spatial sound corresponds to the azimuth information (Par.[0050-0051] rendering engine #12 implements both the location information of the sound source and current orientation of the user, as azimuthal information shown in figure 1, to generate spatial sound data for output via headphones #2 to a user).
Layton does not disclose expressly determining whether a spatial position indicated based on the azimuth information is on a preset spatial direction; generating volume decrease indication information in response to determining that the spatial position indicated based on the azimuth information is not on the preset spatial direction, wherein the volume decrease indication information indicates to decrease volume of a spatial sound, and generating the content information based on the volume decrease information.
Ek discloses a data generation method comprising: determining whether a spatial position of a spatial object (fig.1A,B #108; Par.[0038]), indicated based on the azimuth information is on a preset spatial direction (Par.[0040] As shown in figures 1A,1B; when object #108 is determined to lie in the same azimuthal direction (i.e. the preset spatial direction) as the user's head #102 (see fig.1B), it is determined that the object is a gazed object that is on a preset spatial direction); generating volume decrease indication information in response to determining that the spatial position indicated based on the azimuth information is not on the preset spatial direction, wherein the volume decrease indication information indicates to decrease volume of a spatial sound, and generating the content information based on the volume decrease information (Par.[0033][0035][0041] based on a user's gaze direction a selected object may be amplified, which is a volume increase indication; Par.[0101] upon determining that an object is no longer gazed at (i.e. the object is not on the preset spatial direction) a stopping of amplification of the object is performed; wherein the stopping of amplification is considered volume decrease indication information).
It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to amplify or increase the volume of the spatial objects of Layton when a user directs their gaze towards the object, and stop amplifying the objects when the user changes their gaze away from the object, as performed by Ek. The motivation for doing so would have been to increase an awareness of the spatial object when a user directs their gaze towards the object by increasing a sound volume of the object, and discontinue the amplification of the object when the user changes their focus.
Layton does not disclose expressly wherein obtaining the spatial object information comprises locating, by using at least one sound sensor comprised in the data generation apparatus, a spatial position of the spatial object by using a delay estimation positioning method, wherein locating the spatial position comprises locating coordinates of spatial object relative to the data generation apparatus.
Sunder discloses a data generation method comprising: obtaining spatial object information for generating directional information of a spatial object (#118) relative to a data generation apparatus (#102), wherein obtaining the spatial object information comprises locating, by using at least one sound sensor (#104) comprised in the data generation apparatus (Par.[0021-0022]), a spatial position of the spatial object by using a delay estimation positioning method (Par.[0025-0026] a location or spatial position of sound source #118 may be determined based on an interaural time difference (ITD) of sound received at each microphone #104; wherein an ITD is a measure of delay between the microphones); wherein locating the spatial position comprises locating coordinates of spatial object relative to the data generation apparatus (Par.[0056-0069] a Kalman filter model may be used to improve a localization accuracy of the sound source, wherein the model implements a state vector defined by coordinates x1,x2 in a Cartesian coordinate axis system; therefore the localization process of Sunder provides locating coordinates of the spatial object #118) and generating, based on the directional information, spatial sound data for playing the spatial sound, wherein a position of a sound source of the spatial sound corresponds to the directional information (Par.[0027] sound spatializer #118 may spatialize the captured sound #120 such that it appears to come from the location of sound source #118).
It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to obtain the spatial object information of Layton and Ek by using the at least one sound sensor and sound localization process of Sunder. The motivation for doing so would have been to capture real-world sounds and spatially reproduce them to a user with respect to a direction of the real-world sound.
The combination of Layton, Ek, and Sunder discloses generating volume decrease indication information in response to determining that the spatial position indicated based on the azimuth information is not on the preset spatial direction, wherein the volume decrease indication information indicates to decrease volume of a spatial sound, and generating the content information based on the volume decrease information, wherein the volume decrease information is a stopping of amplification of the object (See Ek: Par.[0101] As detailed above).
The combination of Layton, Ek, and Sunder does not disclose expressly wherein the volume decrease information indicates to decrease a volume of a spatial sound by a target value as a gain parameter.
Mehra discloses generating volume decrease indication information (col.6 ln.66-67; col.7 ln.1-2 “different weights”) in response to determining that a spatial position indicated based on directional information of content information is not on a preset spatial direction (col.3 ln.16-43; col.7 ln.57-62 a direction of a gaze of a user is determined as a “preset direction”), wherein the volume decrease indication information indicates to decrease volume of a spatial sound by a target value, and generating, based on the directional information, content information, and the volume decrease information, spatial sound data for playing the spatial sound, wherein a position of a sound source of the spatial sound corresponds to the directional information, and wherein the spatial sound data comprises the target value as a gain parameter (col.6 ln.56-67; col.7 ln.1-27; a lower weight value “gain parameter” is given to audio signals that deviate from the direction of gaze “preset direction”).
It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to use the gain parameter of Mehra as the volume decrease information of Layton, Ek and Sunder. The motivation for doing so would have been to control a volume level of sounds that are not on the preset spatial direction.
With respect to claim 13, Layton discloses the apparatus according to claim 12, wherein the operations comprise: generating the azimuth information based on the spatial object information and at least one of a position or a posture of the data generation apparatus (Par.[0047-0048] As shown in figure 1, the azimuthal information "θ" is the angle between the orientation or position of the user #1 wearing the data generation apparatus and the spatial object #4).
With respect to claim 14, Layton discloses the apparatus according to claim 12, wherein the operations comprise: receiving the spatial object information (Par.[0077-0078] spatial information regarding spatial objects may be received, such as by a unique URL or dynamically sensed using a sensing system).
With respect to claim 15, Layton discloses the apparatus according to claim 14, wherein the operations comprise receiving the spatial object information in at least one of the following three manners: receiving audio stream data generated by an application program (Par.[0078] audio data is streamed from a network server); receiving interface data generated by an application program; or receiving map data stored on a network side or a terminal side (Par.[0077-0078] URLs may be mapped to geographic locations).
With respect to claim 16, Layton discloses the apparatus according to claim 12, wherein the operations comprise: generating the content information and the azimuth information based on the spatial object information in response to determining that the spatial object information meets a preset condition (Par.[0050] the location information of the audio tracks may include a preset condition including “how far away” the track should be heard).
With respect to claim 17, Layton discloses the apparatus according to claim 12, wherein the operations comprise: generating volume increase indication information in response to determining that the spatial object information meets a preset condition, wherein the volume increase indication information indicates to increase volume of the spatial sound corresponding to the spatial object information that meets the preset condition (Par.[0056][0083] a volume of sound sources may be adjusted according to a preset condition of a position or orientation of a user in relation to the spatial object).
With respect to claim 18, Layton discloses the apparatus according to claim 12, wherein the apparatus further comprises a transceiver configured to receive spatial object information (Par.[0111] base station #67 communicates with a transceiver of communication device #65).
With respect to claim 20, Layton discloses a non-transitory computer-readable storage medium, comprising computer instructions, wherein when the computer instructions are run by at least one processor, a data generation apparatus is enabled to perform operations comprising:
obtaining spatial object information for generating azimuth information of a spatial object relative to the data generation apparatus (fig.6 #65; a data generation apparatus such as a mobile phone may be used in conjunction with the method, Par.[0111])(Par.[0050] rendering engine #12 obtains spatial object information or “location information” of audio tracks that are associated with spatial objects, such as the “Object of Interest” #4 of figure 1);
generating content information for describing the spatial object (Par.[0069] audio content may be created by audio clip creation unit #38 for describing spatial objects relative to the user);
generating the azimuth information based on the spatial object information, wherein the azimuth information indicates an azimuth of the spatial position (As shown in figure 1, azimuthal information "θ" describes an azimuthal angle between the user of the data generation apparatus and the spatial object #4); and
generating, based on the azimuth information and the content information, spatial sound data for playing a spatial sound, wherein a position of a sound source of the spatial sound corresponds to the azimuth information (Par.[0050-0051] rendering engine #12 implements both the location information of the sound source and current orientation of the user, as azimuthal information shown in figure 1, to generate spatial sound data for output via headphones #2 to a user).
Layton does not disclose expressly determining whether a spatial position indicated based on the azimuth information is on a preset spatial direction; generating volume decrease indication information in response to determining that the spatial position indicated based on the azimuth information is not on the preset spatial direction, wherein the volume decrease indication information indicates to decrease volume of a spatial sound, and generating the content information based on the volume decrease information.
Ek discloses a data generation method comprising: determining whether a spatial position of a spatial object (fig.1A,B #108; Par.[0038]), indicated based on the azimuth information is on a preset spatial direction (Par.[0040] As shown in figures 1A,1B; when object #108 is determined to lie in the same azimuthal direction (i.e. the preset spatial direction) as the user's head #102 (see fig.1B), it is determined that the object is a gazed object that is on a preset spatial direction); generating volume decrease indication information in response to determining that the spatial position indicated based on the azimuth information is not on the preset spatial direction, wherein the volume decrease indication information indicates to decrease volume of a spatial sound, and generating the content information based on the volume decrease information (Par.[0033][0035][0041] based on a user's gaze direction a selected object may be amplified, which is a volume increase indication; Par.[0101] upon determining that an object is no longer gazed at (i.e. the object is not on the preset spatial direction) a stopping of amplification of the object is performed; wherein the stopping of amplification is considered volume decrease indication information).
It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to amplify or increase the volume of the spatial objects of Layton when a user directs their gaze towards the object, and stop amplifying the objects when the user changes their gaze away from the object, as performed by Ek. The motivation for doing so would have been to increase an awareness of the spatial object when a user directs their gaze towards the object by increasing a sound volume of the object, and discontinue the amplification of the object when the user changes their focus.
Layton does not disclose expressly wherein obtaining the spatial object information comprises locating, by using at least one sound sensor comprised in the data generation apparatus, a spatial position of the spatial object by using a delay estimation positioning method, wherein locating the spatial position comprises locating coordinates of spatial object relative to the data generation apparatus.
Sunder discloses a data generation method comprising: obtaining spatial object information for generating directional information of a spatial object (#118) relative to a data generation apparatus (#102), wherein obtaining the spatial object information comprises locating, by using at least one sound sensor (#104) comprised in the data generation apparatus (Par.[0021-0022]), a spatial position of the spatial object by using a delay estimation positioning method (Par.[0025-0026] a location or spatial position of sound source #118 may be determined based on an interaural time difference (ITD) of sound received at each microphone #104; wherein an ITD is a measure of delay between the microphones); wherein locating the spatial position comprises locating coordinates of spatial object relative to the data generation apparatus (Par.[0056-0069] a Kalman filter model may be used to improve a localization accuracy of the sound source, wherein the model implements a state vector defined by coordinates x1,x2 in a Cartesian coordinate axis system; therefore the localization process of Sunder provides locating coordinates of the spatial object #118) and generating, based on the directional information, spatial sound data for playing the spatial sound, wherein a position of a sound source of the spatial sound corresponds to the directional information (Par.[0027] sound spatializer #118 may spatialize the captured sound #120 such that it appears to come from the location of sound source #118).
It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to obtain the spatial object information of Layton and Ek by using the at least one sound sensor and sound localization process of Sunder. The motivation for doing so would have been to capture real-world sounds and spatially reproduce them to a user with respect to a direction of the real-world sound.
The combination of Layton, Ek, and Sunder discloses generating volume decrease indication information in response to determining that the spatial position indicated based on the azimuth information is not on the preset spatial direction, wherein the volume decrease indication information indicates to decrease volume of a spatial sound, and generating the content information based on the volume decrease information, wherein the volume decrease information is a stopping of amplification of the object (See Ek: Par.[0101] As detailed above).
The combination of Layton, Ek, and Sunder does not disclose expressly wherein the volume decrease information indicates to decrease a volume of a spatial sound by a target value as a gain parameter.
Mehra discloses generating volume decrease indication information (col.6 ln.66-67; col.7 ln.1-2: "different weights") in response to determining that a spatial position indicated based on azimuth information of content information is not on a preset spatial direction (col.3 ln.16-43; col.7 ln.57-62: a direction of a gaze of a user is determined as a "preset direction"), wherein the volume decrease indication information indicates to decrease a volume of a spatial sound by a target value, and generating, based on the azimuth information, the content information, and the volume decrease information, spatial sound data for playing the spatial sound, wherein a position of a sound source of the spatial sound corresponds to the azimuth information, and wherein the spatial sound data comprises the target value as a gain parameter (col.6 ln.56-67; col.7 ln.1-27: a lower weight value ("gain parameter") is given to audio signals that deviate from the direction of gaze ("preset direction")).
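The gaze-dependent weighting attributed to Mehra above can be illustrated as follows: a source aligned with the gaze direction keeps full gain, while sources that deviate from it receive a lower gain ("weight"). The linear falloff, the 90-degree cutoff, and the function names are hypothetical choices for illustration only.

```python
import math

def direction_gain(source_azimuth_deg, gaze_azimuth_deg, falloff_deg=90.0):
    """Assign a gain ("weight") to a sound source based on how far its
    direction deviates from the listener's gaze direction.

    Sources aligned with the gaze get gain 1.0; the gain decreases
    linearly to 0.0 at `falloff_deg` of deviation (a hypothetical falloff).
    """
    # Smallest angular difference, wrapped into [0, 180] degrees.
    diff = abs((source_azimuth_deg - gaze_azimuth_deg + 180.0) % 360.0 - 180.0)
    return max(0.0, 1.0 - diff / falloff_deg)

def apply_gain(samples, gain):
    """Scale audio samples by the computed gain parameter."""
    return [s * gain for s in samples]

# A source 45 degrees off-gaze receives half the gain of an on-gaze source.
print(direction_gain(45.0, 0.0))   # 0.5
print(direction_gain(0.0, 0.0))    # 1.0
print(direction_gain(270.0, 0.0))  # wraps to 90 deg off-gaze -> 0.0
```

Under such a scheme, the resulting gain value serves as the "target value" by which the volume of an off-direction spatial sound is decreased.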
It would have been obvious before the effective filing date of the claimed invention to a person of ordinary skill in the art to use the gain parameter of Mehra as the volume decrease information of Layton, Ek, and Sunder. The motivation for doing so would have been to control a volume level of sounds that are not on the preset spatial direction.
With respect to claim 21, Layton discloses the non-transitory computer-readable medium according to claim 20, wherein generating the azimuth information based on the spatial object information comprises: generating the azimuth information based on the spatial object information and at least one of a position or a posture of the data generation apparatus (Par.[0047-0048]: as shown in figure 1, the azimuth information "θ" is the angle between the orientation or position of the user #1 wearing the data generation apparatus and the spatial object #4).
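The azimuth relationship cited from Layton's figure 1 can be sketched as a simple computation: the angle θ between the apparatus's facing direction (its posture) and the spatial object follows from their positions. The 2-D geometry, the "heading" convention (0° along +y), and the function name are assumptions for illustration, not taken from the reference.

```python
import math

def azimuth_to_object(device_pos, device_heading_deg, object_pos):
    """Azimuth of the object relative to the device's facing direction,
    in degrees, wrapped into (-180, 180].

    device_pos / object_pos are (x, y) tuples; heading 0 deg points
    along +y, increasing clockwise (a compass-style convention).
    """
    dx = object_pos[0] - device_pos[0]
    dy = object_pos[1] - device_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))  # absolute bearing of object
    rel = (bearing - device_heading_deg + 180.0) % 360.0 - 180.0
    return rel if rel != -180.0 else 180.0

# An object to the front-right of a north-facing device sits at +45 degrees.
print(round(azimuth_to_object((0.0, 0.0), 0.0, (1.0, 1.0)), 6))  # 45.0
```

This mirrors the claim mapping: the azimuth information depends on both the spatial object information (the object's position) and the position/posture of the data generation apparatus.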
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-18 and 20-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON R KURR whose telephone number is (571) 270-5981. The examiner can normally be reached M-F: 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin, can be reached at (571) 272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JASON R. KURR
Primary Examiner
Art Unit 2695
/JASON R KURR/Primary Examiner, Art Unit 2695