Prosecution Insights
Last updated: April 19, 2026
Application No. 18/037,131

Riding Tool Identification Method and Device

Status: Non-Final OA (§103)
Filed: May 16, 2023
Examiner: ABDULLAEV, ERKIN SHAVKATOVICH
Art Unit: 2648
Tech Center: 2600 — Communications
Assignee: Honor Device Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 88% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (7 granted / 8 resolved; +25.5% vs TC avg; above average)
Interview Lift: +14.3% (moderate), across resolved cases with interview
Typical Timeline: 2y 11m avg prosecution
Career History: 39 total applications across all art units, 31 currently pending
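As a sanity check on the headline numbers, the metrics above can be reproduced with simple arithmetic. The formulas below, including the reading of "Interview Lift" as the allowance-rate gap between cases with and without an interview, are inferences; the dashboard does not publish its exact methodology.

```python
# Back-of-the-envelope reconstruction of the examiner metrics above.
# The lift definition is an assumption, not documented by the dashboard.

granted, resolved = 7, 8
career_allow_rate = granted / resolved        # 0.875, displayed as 88%
implied_tc_avg = career_allow_rate - 0.255    # from "+25.5% vs TC avg"

def interview_lift(rate_with, rate_without):
    """Assumed definition: allowance-rate difference, with vs. without interview."""
    return rate_with - rate_without

print(f"{career_allow_rate:.1%}")             # 87.5%
print(f"{implied_tc_avg:.1%}")                # 62.0%
```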

Statute-Specific Performance

§101:  7.7% (-32.3% vs TC avg)
§103: 55.8% (+15.8% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)

Comparisons are against an estimated Tech Center average; based on career data from 8 resolved cases.
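One detail worth noticing: subtracting each "vs TC avg" delta from the examiner's rate yields the same implied Tech Center baseline for every statute, suggesting the comparison uses a single overall TC estimate rather than per-statute averages. That reading of the deltas is an assumption.

```python
# Back out the implied Tech Center baseline from the table above:
# TC average = examiner rate - delta (assumed definition of the delta).

rates  = {"101": 7.7,   "103": 55.8, "102": 19.2, "112": 15.4}
deltas = {"101": -32.3, "103": 15.8, "102": -20.8, "112": -24.6}

implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)   # every statute implies the same 40.0% baseline
```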

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/13/2026 has been entered.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). A certified copy of parent Application No. CHINA 202210006782.4, filed on January 05, 2022, has been filed in Application No. 18/037,131, filed on May 16, 2023. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. It is also noted that the present application is a 371 National Phase Patent Application of PCT/CN2022/139339, for which the 371(c) filing date is December 15, 2022.

Response to Arguments

The Applicant's arguments with respect to claims 20-38 have been considered but are not persuasive. Applicant argues that Han does not teach "wherein the public transportation voice broadcast during ride is recognized based on a broadcast frequency of the voice signal and" as recited in claims 20, 37, and 38.
However, Han suggests obtaining a voice signal (Fig.4:440, pars. 82, 92, 95, 96); recognizing a public transportation voice broadcast during ride (Fig.4:440, par. 97); and recognizing the voice broadcast during ride based on a broadcast frequency of the voice signal (Fig.4:440, par. 97, "…The VAD module 142 may detect the voice signal at the second pre-set time intervals which is shorter than the first pre-set time intervals with respect to the audio signal which is received consecutively in time."). LI discloses the broadcast frequency thresholds of different riding tools (page 5, paragraph 4). The motivation, citations, and explanations are provided in Claim Rejections - 35 USC § 103.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 20, 28-29, and 31-38 are rejected under 35 U.S.C. 103 as being unpatentable over LI (CN 110516760 A) (IDS; see translation submitted 8/20/2025) in view of Mohammed (US-20150071090-A1) in further view of HAN (US-20220199110-A1).

Regarding Claim 20, LI discloses a method, comprising: obtaining at least one of an acceleration signal acquired by an acceleration sensor (page 4, paragraph 5, inertial sensor IMU) in an electronic device or a magnetometer signal (page 4, paragraph 5, geomagnetic sensor) acquired by a magnetometer sensor in the electronic device (page 4, paragraph 3, Fig.1:101, "the situation recognition device can be executed by the terminal configuration of," and page 4, paragraph 4, "obtaining the first initial data collected by the one or more first sensors."
and page 4, paragraph 5, "wherein the first sensor can be…inertial sensor IMU,…a geomagnetic sensor" (i.e., obtaining data from sensors located in a terminal.)); identifying a riding tool (page 4, paragraph 8, context type) based on at least one of an acceleration feature or a magnetometer feature (page 4, paragraph 8, Fig.1:102, "step 102, performing fusion process to the first initial data to obtain context type corresponding to the first initial data and the type confidence" and page 5, paragraph 8, "the context type may include a passenger airplane, passenger trains, passenger automobile, riding the subway, bus, driving a automobile…" (i.e., the examiner reads "riding tool" as a vehicle type, and the context type covers the idea of identifying the vehicle.)), to obtain a riding classification result (page 4, paragraph 9, type confidence) (page 4, paragraph 9, "In some embodiments of the invention, type confidence refers to that the terminal current environment belongs to the probability value of the certain situation type." (i.e., the examiner points to page 5, par.4, where an example explains how sensor data is obtained; although the sensor in that example is a microphone, an inertial sensor or geomagnetic sensor likewise outputs a context-type result.)), wherein the acceleration feature is obtained based on the acceleration signal and the magnetometer feature is obtained based on the magnetometer signal (page 4, paragraph 5, "wherein the first sensor can be…inertial sensor IMU,…a geomagnetic sensor" and page 4, paragraph 8, Fig.1:102, "step 102, performing fusion process to the first initial data to obtain context type corresponding to the first initial data and the type confidence" (i.e., the first initial data refers to the first sensors, and those sensors are the inertial sensor IMU and the geomagnetic sensor.)); recognizing a (page 7, paragraph 1, "extracting the driving sound of the rail car, sound of different people speaking.
contextual characteristics data reporting station by voice sound and the like, as the context feature data of low power consumption microphone," (i.e., LI discloses listening to voice and sounds obtained from the microphone.)), wherein the (page 5, paragraph 4, "identifying user voice data collected with the IMU using the microphone…obtaining the user riding the subway confidence is 0.4" and page 7, paragraph 1, "the microphone with low power consumption of performing identification processing, identifying the user may be on the bus station, train, subway, railway…" (i.e., page 7, par.1 teaches "broadcast frequency thresholds of different riding tools": the system identifies which vehicle the user is on based on the sounds collected from the microphone, so there must be different thresholds for each vehicle frequency to determine the type of vehicle the user is riding. Page 5, par.4 shows an example of sound being extracted and categorized as riding a subway and no other type of vehicle.)), and the voice broadcast recognition result indicates a category of a riding tool corresponding to the voice signal (page 7, paragraph 1, "…identifying the user maybe on the bus station, train, subway, railway, as the initial motion recognition result;…determine the target situation type on the subway." and page 5, paragraph 8, "the context type may include a passenger airplane, passenger trains, passenger automobile, riding the subway, bus, driving a automobile…" (i.e., LI discloses using the voice broadcast result to determine the situation type; the situation type reads on the claimed "category of a riding tool".)).
determining a category (page 5, paragraph 4, situation type) of the riding tool based on the riding classification result and the (page 5, paragraph 4, "…the traffic behavior of inertial data identifying user voice data collected with the IMU using the microphone, the inertia data collected by the voice data collected by the microphone with the IMU for fusion processing, obtaining the user riding the subway confidence is 0.4,…target context type is riding or riding subway (target context type)." and page 5, paragraph 6, "wherein the target context type is that situation type of terminal current environment most likely belongs to." and page 5, paragraph 8, "the context type may include a passenger airplane, passenger trains, passenger automobile, riding the subway, bus, driving a automobile…" (i.e., using inertial data and voice data to determine a category. Page 5, paragraph 8 explains that the context type can be an airplane, train, automobile, or subway. Therefore, the context type refers to the captured sensor data identifying the vehicle, and the situation type reads on "a category".)).

However, LI does not disclose switching a microphone in the electronic device from off to on when detecting that the electronic device is in a riding state; performing a public transportation voice broadcast recognition operation using the microphone to obtain a public transportation voice broadcast recognition result, wherein the public transportation voice broadcast recognition operation comprises: obtaining a voice signal acquired by the microphone in the electronic device, and extracting a voice feature based on the voice signal; and recognizing a public transportation voice broadcast during ride based on the voice feature, to obtain the public transportation voice broadcast recognition result, wherein the public transportation voice broadcast during ride is recognized based on a broadcast frequency of the voice signal.
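To make the disputed claim-20 flow concrete, here is a minimal sketch of the recited method: classify the riding tool from acceleration/magnetometer features, switch the microphone on when a riding state is detected, recognize the public transportation broadcast from its broadcast frequency, and fuse the two results. Every function name, threshold, and fusion rule below is a hypothetical illustration of the claim language, not taken from LI, Mohammed, or HAN.

```python
# Hypothetical sketch of the claim-20 method; all names, thresholds, and the
# fusion rule are illustrative assumptions, not any reference's implementation.

def identify_riding_tool(accel_feat, mag_feat):
    """Riding classification result: per-category scores (stand-in classifier)."""
    score = 0.6 * accel_feat + 0.4 * mag_feat     # toy fusion of the two features
    return {"metro": score, "high_speed_railway": 1.0 - score}

def recognize_broadcast(broadcast_freq, freq_thresholds):
    """Map the observed announcement frequency (broadcasts per minute) to the
    riding-tool category whose frequency threshold it satisfies, highest first."""
    for category, threshold in sorted(freq_thresholds.items(),
                                      key=lambda kv: -kv[1]):
        if broadcast_freq >= threshold:
            return category
    return None

def classify_ride(accel_feat, mag_feat, riding_detected, broadcast_freq,
                  freq_thresholds):
    scores = identify_riding_tool(accel_feat, mag_feat)
    mic_on = riding_detected                      # mic switched from off to on
    broadcast = recognize_broadcast(broadcast_freq, freq_thresholds) if mic_on else None
    best = max(scores, key=scores.get)
    # Fuse: let the broadcast recognition result confirm or override the
    # sensor-based result when the microphone was active.
    return broadcast if broadcast is not None else best
```

For instance, with the microphone gated off (no riding state detected) the sketch falls back to the sensor-only classification result.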
Mohammed discloses switching a microphone in the electronic device from off to on when detecting that the electronic device is in a riding state (paragraph [0035], Fig.3, "If the accelerometer movement indication corresponds to the vehicle device mode (YES at 304), then the electronic device 100 activates (306, 308, 310) a set of second sensors…For simplicity, the set of second sensors as shown in the example of FIG. 3 includes only three sensors. Additional sensors (e.g., the audio sensor 115, the Bluetooth sensor 116, or other sensors) or fewer sensors may be used in other implementations." and paragraph [0024], "The audio sensor 115 provides an audio recording feature to the electronic device 100. Examples of the audio sensor 115 include a microphone or other audio capture device." and paragraph [0032], "After updating the current device mode, the electronic device 100 activates or deactivates one or more of the sensors 110 to scan for movement indications for a next device mode. In one example, the electronic device 100 toggles between device modes (e.g., between a vehicle device mode and a walking device mode) by alternating between configurations of activated or deactivated sensors 110." and paragraph [0033], "While the sensors 110 are selectively activated or deactivated for the purpose of determining the device mode, the sensors 110 can also be used for other features based on inputs from the operating system or the user of the electronic device 100. For example, the GPS sensor 112 is used in conjunction with a map application, the WiFi sensor 113 is used for wireless data transfer, or the audio sensor 115 is used to record speech." (i.e., activating the audio sensor 115 when the accelerometer movement indicates a riding state.)).

LI and Mohammed are considered to be analogous to the claimed invention because they are in the same field: user interfaces specially adapted for cordless or mobile telephones…according to context-related or environment-related conditions.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified LI to implement the use of additional sensors as described in Mohammed (Fig.2, Fig.3) in order to efficiently use sensors to conserve the smartphone's battery life (Mohammed, paragraph [0010], "it is desirable to efficiently use these methods to conserve the smartphone's battery life while increasing accuracy of the vehicle mode determination.").

However, LI in view of Mohammed do not disclose performing a public transportation voice broadcast recognition operation using the microphone to obtain a public transportation voice broadcast recognition result, wherein the public transportation voice broadcast recognition operation comprises: obtaining a voice signal acquired by the microphone in the electronic device, and extracting a voice feature based on the voice signal; and recognizing a public transportation voice broadcast during ride based on the voice feature, to obtain the public transportation voice broadcast recognition result, wherein the public transportation voice broadcast during ride is recognized based on a broadcast frequency of the voice signal.

HAN discloses performing a public transportation voice broadcast recognition operation using the microphone to obtain a public transportation voice broadcast recognition result (paragraph [0082], "As a specific example, the processor 140 may be configured to combine, based on the detected voice signal (e.g., “This stop is Samseong Station.”) being determined as including the voice signal for guiding the stop (e.g., “Samseong Station”), information on the stop (e.g., “Samseong Station”) included in the detected voice signal to a specific position of a pre-set sentence (e.g., “This stop is XXX.”).…" and paragraph [0095], "Referring to FIGS. 3 and 4, the processor 140 may be configured to receive an audio signal through the microphone 110 (S410)."
and paragraph [0096], Fig.4, "The processor 140 may be configured to determine, through the acoustic scene classification (ASC) module 141, whether the user is on-board the public transport 200 based on the audio signal (S420). Here, the ASC module 141 may input the audio signal to the artificial intelligence model (i.e., the artificial intelligence model trained to determine whether the user is on-board the public transport 200 based on the audio signal on the inside environment of the public transport 200) stored in the memory 130, and determine whether the user is on-board the public transport 200 through the output data." (i.e., HAN discloses listening to the broadcast message such as stop information.)), wherein the public transportation voice broadcast recognition operation comprises (Fig.3, Fig.4): obtaining a voice signal acquired by a microphone in the electronic device, and extracting a voice feature based on the voice signal (paragraph [0095], Figs.3 and 4, "Referring to FIGS. 3 and 4, the processor 140 may be configured to receive an audio signal through the microphone 110 (S410)." 
and paragraph [0096], "Here, the ASC module 141 may input the audio signal to the artificial intelligence model (i.e., the artificial intelligence model trained to determine whether the user is on-board the public transport 200 based on the audio signal on the inside environment of the public transport 200) stored in the memory 130," (i.e., the examiner points to Fig.4:410, wherein the audio signal is received and extracted for input into an artificial intelligence model.)); and recognizing a public transportation voice broadcast during ride based on the voice feature, to obtain the public transportation voice broadcast recognition result (paragraph [0098], Fig.4:450, "The processor 140 may be configured to determine, through a voice anti spoofing (VAS) module 143, whether the detected voice signal is a voice signal output through the acoustic device 250 of the public transport 200 (S450)." (i.e., determining whether the sound input is from a public transportation voice broadcast. LI discloses obtaining a voice feature, but the examiner relies on HAN to explicitly teach obtaining the public transportation voice broadcast.)), wherein the public transportation voice broadcast during ride is recognized based on a broadcast frequency of the voice signal, the broadcast frequency indicates how often public transportation voice broadcasts occur during ride (paragraph [0049], "The microphone 110 may be configured to consecutively receive audio signals…" and paragraph [0096], "…In addition, the ASC module 141 may determine whether the user is on-board the public transport 200 at the first pre-set time intervals with respect to the audio signal which is received consecutively in time." and paragraph [0097], "Based on the user being determined as on-board the public transport 200 (S430: Yes), the processor 140 may be configured to detect the audio signal of a section in which the level of the audio signal exceeds the pre-set level as the voice signal through a voice activity detection (VAD) module 142 (S440). The VAD module 142 may detect the voice signal at the second pre-set time intervals which is shorter than the first pre-set time intervals with respect to the audio signal which is received consecutively in time." (i.e., the voice broadcast being received "consecutively" at "pre-set time intervals" reads on how often the public transportation broadcasts occur during the ride. Fig.4:430 uses voice data to determine whether the user is in the vehicle, and Fig.4:440 determines whether the voice is the public transportation voice broadcast during ride.)).

LI in view of Mohammed and HAN are considered to be analogous to the claimed invention because they are in the same field: procedures used during a speech recognition process, e.g., man-machine dialogue. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified LI to implement the method of HAN to listen to the public transportation voice broadcast in order to provide the user with information, such as a stop announcement, when the surroundings make it too difficult to hear, and also to reduce power consumption by receiving audio signals at a certain time interval (HAN, paragraph [0041], "However, the ambient sound allow mode may cause inconvenience to the user since this mode may pick up noises (e.g., conversation between others, engine sounds of vehicles, etc.) that may not be relevant to certain important messages.
For example, when the user is on-board the public transport 200, the user may experience difficulty in listening to important information such as a name of a stop announced during a ride." and paragraph [0059], "For example, the processor 140 may be configured to monitor whether the user is on-board the public transport 200 for a relatively long period of time, and reduce power consumption by only processing audio signals received through the microphone 110 at a certain time interval according to a monitoring result.").

Regarding Claim 28, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 20. LI further discloses wherein identifying the riding tool based on at least one of the acceleration feature or the magnetometer feature, to obtain the riding classification result comprises: inputting at least one of the acceleration feature or the magnetometer feature into an artificial intelligence riding classification model (page 7, paragraph 4, situation recognition model) to obtain the riding classification result outputted by the artificial intelligence riding classification model (page 7, paragraph 4, "obtaining the initial data sample and a situation type samples corresponding to the initial data sample and the type confidence sample; then, the initial data samples input to the situation recognition model of the training." (i.e., the initial data sample is data from the first sensors input into a model to obtain a confidence result and the situation type. Page 7, par.1 shows an example of the situation type being in a subway.)), wherein the artificial intelligence riding classification model is obtained by training based on at least one of historical acceleration features and historical magnetometer features of [[the ]]riding tools of different categories (page 7, paragraph 3, "the performing fusion process to the first initial data, further comprising: the first initial data input pre-trained situation recognition model to carry out fusion process to obtain the situation type and the type corresponding to the first initial data confidence." and page 7, paragraph 4, "obtaining the initial data sample and a situation type samples corresponding to the initial data sample…" (i.e., the initial data sample could be the historical acceleration-feature and historical magnetometer-feature data. Although LI does not disclose the specific sensor data, page 4, paragraph 5 discloses multiple sensors used to identify the context type, which leads to the situation type, so it can be inferred the model was trained on sensors like the inertial sensor IMU and the geomagnetic sensor.)), and the riding classification result outputted by the artificial intelligence riding classification model indicates scores (page 7, paragraph 6, confidence) of the riding tools of different categories (page 5, paragraph 4, "obtaining the user riding the subway confidence is 0.4," and page 7, paragraph 6, "In the embodiment, the first initial data input pre-trained situation recognition model to carry out fusion process to obtain the situation type and the type corresponding to the first initial data confidence, with a recognition confidence to the situation type and type judgment precision is high." (i.e., LI's model outputs a confidence, as shown in the example at page 5, paragraph 4.)).

Regarding Claim 29, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 20.
LI further discloses wherein recognizing the public transportation voice broadcast during ride based on the voice feature, to obtain the voice broadcast recognition result comprises: inputting the voice feature into an artificial intelligence voice type recognition model to obtain the public transportation voice broadcast recognition result outputted by the artificial intelligence voice type recognition model (page 7, paragraph 3, "the performing fusion process to the first initial data, further comprising: the first initial data input pre-trained situation recognition model to carry out fusion process" and page 7, paragraph 4, "the initial data samples input to the situation recognition model of the training. situational type result to obtain the situation recognition model generation" page 8, paragraph 3, “Because the target situation type of noise of different types, therefore, is conducted to the removing-noise process for voice signal better collection of the terminal,” (i.e., LI situation recognition model discloses obtaining first initial data and that first initial data is collected by first sensors including the microphone on page 4 par.4-5. The data is input into the model to recognize the sounds belonging to specific vehicle in order to determine the situation type. LI further discloses a noise model that removes noise to better collect voice signal in order to determine the situation type.)), wherein the artificial intelligence voice type recognition model is obtained by training based on historical voice features of [[the ]]riding tools of different categories (page 7, paragraph 4, "the initial data samples input to the situation recognition model of the training. 
situational type result to obtain the situation recognition model generation" (i.e., the historical voice features map to the initial data samples.)), and the public transportation voice broadcast recognition result indicates a category of a riding tool corresponding to the voice feature (page 7, paragraph 1, "extracting the driving sound of the rail car,…contextual characteristics…identifying the user maybe on the bus station, train, subway," and page 7, paragraph 4, "situation type samples corresponding to the initial data sample and the type confidence samples, or, the first similarity context type result situation recognition model is generated every time the situation for the type of sample" (i.e., the extracted voice data is used to identify the category or situation type, since the model identifies the riding tool.)). HAN further discloses the public transportation voice broadcast (paragraph [0098], "The processor 140 may be configured to determine, through a voice anti spoofing (VAS) module 143, whether the detected voice signal is a voice signal output through the acoustic device 250 of the public transport 200 (S450)." and paragraph [0082], "…based on the detected voice signal (e.g., “This stop is Samseong Station.”) being determined as including the voice signal for guiding the stop (e.g., “Samseong Station”)…" (i.e., the examiner points to the rejection of claim 20, wherein HAN listens for the public transportation voice broadcast.)). HAN also discloses inputting the voice feature into an artificial intelligence voice type recognition model to obtain the public transportation voice broadcast recognition result outputted by the artificial intelligence voice type recognition model (paragraph [0087], "The automatic speech recognition (ASR) module 145 may convert the detected voice signal to text (string) of words, phoneme sequence, or the like by using a language model and an acoustic model.
The language model may be a model that assigns probability to a word or a phoneme sequence and the acoustic model may be a model representing a relationship between a voice signal and a text of the voice signal. The models may be configured based on a probability statistics or an artificial neural network." and paragraph [0091], "In an embodiment, the processor 140 may be configured to control, based on the detected voice signal being determined as including the voice signal for guiding the stop," (i.e., HAN discloses inputting the public transportation voice broadcast into a model in order to recognize the public transportation voice broadcast, and not just any sound.)). The proposed combination, as well as the motivations for combining the references presented in the rejection of the parent claim, apply to this claim and are incorporated herein by reference.

Regarding Claim 31, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 20. LI further discloses wherein recognizing the public transportation voice broadcast during ride based on the voice feature, to obtain the public transportation voice broadcast recognition result comprises: recognizing the public transportation voice broadcast during ride based on key content (page 6, last paragraph, feature extraction) of the voice signal (page 6, last paragraph, "…terminal comprises a microphone with low power consumption and pressure gauge, by performing feature extraction to original data of the microphone," and page 7, paragraph 1, "extracting the driving sound of the rail car, sound of different people speaking." (i.e., extracting features picked up from the microphone, like driving sounds.)) and preset key content of different riding tools, to obtain the public transportation voice broadcast recognition result (page 7, paragraph 1, "the microphone with low power consumption of performing identification processing, identifying the user maybe on the bus station, train, subway, railway, as the initial motion recognition result;" (i.e., using the data from the microphone to distinguish between multiple riding tools. Page 7, paragraph 4 discloses "preset key content" by providing a sample to recognize the sound the vehicle makes, outputting a confidence.)), wherein the public transportation voice broadcast recognition result indicates a category of a riding tool corresponding to the voice signal (page 7, paragraph 1, "…identifying the user maybe on the bus station, train, subway, railway, as the initial motion recognition result…determine the target situation type on the subway." (i.e., LI discloses using the voice broadcast result to determine the situation type.)). HAN further discloses the public transportation voice broadcast (paragraph [0098], "The processor 140 may be configured to determine, through a voice anti spoofing (VAS) module 143, whether the detected voice signal is a voice signal output through the acoustic device 250 of the public transport 200 (S450)." and paragraph [0082], "…based on the detected voice signal (e.g., “This stop is Samseong Station.”) being determined as including the voice signal for guiding the stop (e.g., “Samseong Station”)…" (i.e., the examiner points to the rejection of claim 20, wherein HAN listens for the public transportation voice broadcast.)). The proposed combination, as well as the motivations for combining the references presented in the rejection of the parent claim, apply to this claim and are incorporated herein by reference.

Regarding Claim 32, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 20.
LI further discloses wherein determining the category of the riding tool based on the riding classification result and the public transportation voice broadcast recognition result comprises: determining, when a high-speed railway score is the largest in the riding classification result (page 4, paragraph 9, "In some embodiments of the invention, type confidence refers to that the terminal current environment belongs to the probability value of the certain situation type." and page 5, paragraph 2, "the specific value of the first threshold may be set according to actual application scene. For example, the first threshold value can be 0.8, 0.85, or 0.9." and page 5, paragraph 8, "the context type may include a…passenger trains," and page 6, paragraph 1, "…railway…" (i.e., LI model uses the sensor data like the accelerometer to determine the user is in a train but can also do the same like a passenger train or railway.)), and the high-speed railway score meets a first threshold condition (page 5, paragraph 7, "…the obtained context type and the type corresponding to the first initial data after the confidence, if the type confidence degree is larger than or equal to the first threshold value. 
it shows that situation type first initial data that has been collected by the terminal through the first sensor judges the terminal current environment…it can directly type the context corresponding to the first initial data as the target scene type," (i.e., Because LI's model uses the first sensor to judge the current environment and the confidence meets a threshold, the model does not need to use second sensors, such as the camera, to determine the target context type.)), that the riding tool is a high-speed railway (page 5, paragraph 8, "the context type may include a…passenger trains," and page 6, paragraph 1, "…railway…" (i.e., LI gives an example in which a subway is determined to be the riding tool, as explained at page 5, par. 4, and page 5, par. 8 implies the model can do the same for other vehicle types.)); determining, when a metro score is the largest in the riding classification result (page 4, paragraph 9, "…type confidence refers to that the terminal current environment belongs to the probability value of the certain situation type." and page 5, paragraph 8, "the context type may include…riding the subway," (i.e., same rationale as for "the high-speed railway score". From first-sensor data, which could be the accelerometer or the magnetometer, LI's model determines that the subway is the most likely context type.)), and the metro score meets a second threshold condition, that the riding tool is a metro (page 5, paragraph 4, "the first threshold value is 0.8,…obtaining the user riding the subway confidence is 0.4," and page 7, paragraph 1, "can obtain the terminal environment is the type of subway is confidence is 0.9, iron is of confidence is 0.2, so as to determine the target situation type on the subway."
and page 7, paragraph 4, "…if the similarity is less than the first preset similarity threshold, or a second predetermined similarity is less than the similarity threshold, then adjusting the parameters…" (i.e., LI discloses that the user is riding a subway since the context type meets a threshold. Although LI does not explicitly state "a second threshold", the model is trained on different context types and, during training, sets thresholds to associate sensor data with each vehicle.)); determining, when the metro score meets a third threshold condition (page 4, paragraph 9, "…type confidence refers to that the terminal current environment belongs to the probability value of the certain situation type." and page 5, paragraph 8, "the context type may include…riding the subway," and page 5, paragraph 2, "the specific value of the first threshold may be set according to actual application scene..." (i.e., same rationale as for "the high-speed railway score". From first-sensor data, which could be the accelerometer or the magnetometer, LI's model determines that the subway is the most likely context type. Page 5, paragraph 2 discusses setting different threshold values based on the application scene, so there can be multiple threshold conditions for the first sensor, such as "a third threshold" for a subway compared to a passenger train.)), and the public transportation voice broadcast recognition result is a metro broadcast voice, that the riding tool is the metro (page 7, paragraph 1, "extracting the driving sound of the rail car, sound of different people speaking.
contextual characteristics data reporting station by voice sound and the like, as the context feature data of low power consumption microphone… identifying the user maybe on the bus station, train, subway, railway," (i.e., LI discloses obtaining sounds broadcast from the vehicle and, based on those sounds, determining the riding tool.)); determining, when a bus/car score in the riding classification result meets a fourth threshold condition (page 4, paragraph 9, "In some embodiments of the invention, type confidence refers to that the terminal current environment belongs to the probability value of the certain situation type." and page 5, paragraph 8, "the context type may include a…passenger automobile…bus," (i.e., LI's model can determine a bus and a passenger automobile.)), and the public transportation voice broadcast recognition result is a bus broadcast voice, that the riding tool is a bus (page 5, paragraph 8, "the context type may include a…bus," and page 7, paragraph 1, "extracting the driving sound…identifying the user maybe on the bus station," (i.e., although LI says "bus station", page 5, paragraph 8 recites "riding" contexts such as "riding the subway", so the context type can likewise be riding the bus.)); and determining, when the bus/car score in the riding classification result is largest, the bus/car score meets a fifth threshold condition (page 4, paragraph 9, "In some embodiments of the invention, type confidence refers to that the terminal current environment belongs to the probability value of the certain situation type." and page 5, paragraph 8, "the context type may include a…bus," and page 5, paragraph 2, "the specific value of the first threshold may be set according to actual application scene. For example, the first threshold value can be 0.8, 0.85, or 0.9."
(i.e., page 5, paragraph 2 discusses setting different threshold values based on the application scene, so there can be multiple threshold conditions for the first sensor.)), and the public transportation voice broadcast recognition result is not the bus broadcast voice and the metro broadcast voice, that the riding tool is a car (page 5, paragraph 8, "the context type may include a…passenger automobile" and page 7, paragraph 1, "extracting the driving sound of the rail car, sound of different people speaking. contextual characteristics data reporting station by voice sound and the like, as the context feature data of low power consumption microphone…" (i.e., although the claim requires the "voice broadcast recognition" result to be "not the bus…the metro" broadcast, LI's model can identify a context type such as a passenger automobile based on sounds extracted from the microphone that only a car can make, or the mobile device can pick up the number of people speaking, since a car cannot fit the same number of people.)). HAN further discloses public transportation voice broadcast (paragraph [0098], “The processor 140 may be configured to determine, through a voice anti spoofing (VAS) module 143, whether the detected voice signal is a voice signal output through the acoustic device 250 of the public transport 200 (S450).” and paragraph [0082], “…based on the detected voice signal (e.g., “This stop is Samseong Station.”) being determined as including the voice signal for guiding the stop (e.g., “Samseong Station”)…” (i.e., Examiner points to the rejection of claim 20, wherein HAN listens for the public transportation voice broadcast.)). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference. Regarding Claim 33, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 32.
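As a purely illustrative aid (not part of the record), the threshold cascade recited in claim 32 can be sketched as below. The score names, category labels, and the reuse of LI's example value 0.8 for every threshold are assumptions made for readability, not mappings from LI or HAN:

```python
# Hypothetical sketch of the claimed threshold cascade; all names and
# threshold values are illustrative assumptions.
T1 = T2 = T3 = T4 = T5 = 0.8  # LI's example first threshold is 0.8

def classify_riding_tool(scores, broadcast):
    """scores: classifier confidence per category (LI's 'type confidence');
    broadcast: recognized voice-broadcast category, or None."""
    largest = max(scores, key=scores.get)
    if largest == "high_speed_railway" and scores[largest] >= T1:
        return "high_speed_railway"
    if largest == "metro" and scores["metro"] >= T2:
        return "metro"
    if scores.get("metro", 0) >= T3 and broadcast == "metro":
        return "metro"
    if scores.get("bus_car", 0) >= T4 and broadcast == "bus":
        return "bus"
    if (largest == "bus_car" and scores["bus_car"] >= T5
            and broadcast not in ("bus", "metro")):
        return "car"
    return None
```

The final branch mirrors the claim's car determination: the bus/car score is largest and meets its threshold, yet the broadcast result is neither a bus nor a metro broadcast voice.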
LI further discloses wherein determining, when the high-speed railway score is the largest in the riding classification result, and the high-speed railway score meets a first threshold condition, that the riding tool is the high-speed railway comprises: determining, when the high-speed railway score is the largest in the riding classification result, the high-speed railway score meets the first threshold condition (page 5, paragraph 7, "…the obtained context type and the type corresponding to the first initial data after the confidence, if the type confidence degree is larger than or equal to the first threshold value. it shows that situation type first initial data that has been collected by the terminal through the first sensor judges the terminal current environment…it can directly type the context corresponding to the first initial data as the target scene type," and page 5, paragraph 8, "the context type may include a…passenger trains," and page 6, paragraph 1, "…railway…" (i.e., same explanation as in the rejection of claim 32.)), and the base station signal comprises a high-speed railway identifier, that the riding tool is the high-speed railway (page 8, paragraph 4, "if the target context type is riding a train, passenger car and riding a subway traffic behavior type, context service the execution corresponding to the target context type may further include: enhanced communication signal of the terminal. wherein the communication signal may include a 4G signal, the 5G signal…" and page 8, paragraph 5, "a communication signal of enhanced terminal can be adjusted by setting the priority of the high-speed rail cell priority" (i.e., the context service determines the user is on a train and adjusts settings to keep the user connected via 4G, 5G, or Wi-Fi.)), wherein the base station signal is acquired by [[s]]a modem processor in the electronic device (page 8, paragraph 4, "enhanced communication signal of the terminal.
wherein the communication signal may include a 4G signal, the 5G signal" (i.e., the terminal communicates with the cell using 4G or 5G.)). Regarding Claim 34, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 29. LI further discloses wherein identifying the riding tool based on the at least one of the acceleration feature or the magnetometer feature, to obtain the riding classification result comprises: identifying the riding tool based on the magnetometer feature and magnetometer thresholds of different riding tools, to obtain the riding classification result (page 4, paragraph 12, "…each of the first initial data may only have one context type and type and the corresponding confidence," and page 5, paragraph 7, "…the obtained context type and the type corresponding to the first initial data after the confidence, if the type confidence degree is larger than or equal to the first threshold value." (i.e., LI obtains sensor data, the "magnetometer feature", and inputs it into the model, which has thresholds that identify the user's riding tool and outputs a confidence level based on the threshold result. Page 7, par. 4 discusses training the model using thresholds.)). Regarding Claim 35, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 34. LI further discloses wherein determining the category of the riding tool based on the riding classification result and the public transportation voice broadcast recognition result comprises: determining, when the riding classification result is a high-speed railway, that the riding tool is the high-speed railway (page 4, paragraph 9, "In some embodiments of the invention, type confidence refers to that the terminal current environment belongs to the probability value of the certain situation type." and page 5, paragraph 2, "the specific value of the first threshold may be set according to actual application scene.
For example, the first threshold value can be 0.8, 0.85, or 0.9." and page 5, paragraph 8, "the context type may include a…passenger trains," and page 6, paragraph 1, "…railway…" (i.e., LI's model uses sensor data such as the accelerometer to determine the user is in a train, and can do the same for a passenger train or railway.)); determining, when the riding classification result is a metro, and the public transportation voice broadcast recognition result is a metro broadcast voice, that the riding tool is the metro (page 7, paragraph 1, "extracting the driving sound of the rail car, sound of different people speaking. contextual characteristics data reporting station by voice sound and the like, as the context feature data of low power consumption microphone… identifying the user maybe on the bus station, train, subway, railway," (i.e., LI discloses obtaining sounds broadcast from the vehicle and, based on those sounds, determining the riding tool.)); determining, when the riding classification result is a bus or a car, and the public transportation voice broadcast recognition result is a bus broadcast voice, that the riding tool is the bus (page 4, paragraph 9, "In some embodiments of the invention, type confidence refers to that the terminal current environment belongs to the probability value of the certain situation type."
and page 5, paragraph 8, "the context type may include a…bus," and page 7, paragraph 1, "extracting the driving sound…identifying the user maybe on the bus station," (i.e., LI's model can determine a bus based on the magnetometer as well as the microphone.)); and determining, when the riding classification result is the bus or the car, and the public transportation voice broadcast recognition result is not the bus broadcast voice and the metro broadcast voice, that the riding tool is the car (page 5, paragraph 8, "the context type may include a…passenger automobile…bus" and page 7, paragraph 1, "extracting the driving sound of the rail car, sound of different people speaking. contextual characteristics data reporting station by voice sound and the like, as the context feature data of low power consumption microphone…" (i.e., although the claim requires the "voice broadcast recognition" result to be "not the bus…the metro" broadcast, LI's model can identify a context type such as a passenger automobile based on sounds extracted from the microphone that only a car can make, or the mobile device can pick up the number of people speaking, since a car cannot fit the same number of people. Claim 35 is similar to claim 32, and most of the rationale for the rejection applies to this claim.)). HAN further discloses public transportation voice broadcast (paragraph [0098], “The processor 140 may be configured to determine, through a voice anti spoofing (VAS) module 143, whether the detected voice signal is a voice signal output through the acoustic device 250 of the public transport 200 (S450).” and paragraph [0082], “…based on the detected voice signal (e.g., “This stop is Samseong Station.”) being determined as including the voice signal for guiding the stop (e.g., “Samseong Station”)…” (i.e., Examiner points to the rejection of claim 20, wherein HAN listens for the public transportation voice broadcast.)).
The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference. Regarding Claim 36, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 35. LI further discloses wherein determining the category of the riding tool based on the riding classification result and the public transportation voice broadcast recognition result comprises: determining, when the base station signal comprises a high-speed railway identifier, that the riding tool is the high-speed railway (page 8, paragraph 4, "if the target context type is riding a train, passenger car and riding a subway traffic behavior type, context service the execution corresponding to the target context type may further include: enhanced communication signal of the terminal. wherein the communication signal may include a 4G signal, the 5G signal…" and page 8, paragraph 5, "a communication signal of enhanced terminal can be adjusted by setting the priority of the high-speed rail cell priority" (i.e., the context service determines the user is on a train and adjusts settings to keep the user connected via 4G, 5G, or Wi-Fi.)), wherein the base station signal is acquired by a modem processor in the electronic device (page 8, paragraph 4, "enhanced communication signal of the terminal. wherein the communication signal may include a 4G signal, the 5G signal" and page 10, paragraph 7, “the invention claims a terminal for realizing context identification method, comprising a processor 81,” (i.e., the terminal communicates with the cell using 4G or 5G.)).
HAN further discloses public transportation voice broadcast (paragraph [0098], “The processor 140 may be configured to determine, through a voice anti spoofing (VAS) module 143, whether the detected voice signal is a voice signal output through the acoustic device 250 of the public transport 200 (S450).” and paragraph [0082], “…based on the detected voice signal (e.g., “This stop is Samseong Station.”) being determined as including the voice signal for guiding the stop (e.g., “Samseong Station”)…” (i.e., Examiner points to the rejection of claim 20, wherein HAN listens for the public transportation voice broadcast.)). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference. Claim 37 is similar in scope to claim 20 and is thus rejected under the same rationale. Claim 38 is similar in scope to claim 20 and is thus rejected under the same rationale. LI discloses a non-transitory computer-readable storage medium (page 2, paragraph 1, “The embodiment of the invention claims a situation recognition method, device, terminal and computer-readable storage medium, which can solve the terminal cannot accurately identify the technical problem of the situation type.”). Claim(s) 21 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over LI (CN 110516760 A) (IDS) in view of Mohammed (US-20150071090-A1) in view of HAN (US-20220199110-A1) in further view of ZHAO (CN-107315519-A) (IDS; see translation submitted 8/20/2025). Regarding Claim 21, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 20.
Mohammed further discloses before identifying the riding tool based on the at least one of the acceleration feature or the magnetometer feature, to obtain the riding classification result, detecting whether the electronic device is in a riding state (paragraph [0034], Fig.3:302, "The electronic device 100 scans (302) for an accelerometer movement indication with the movement sensor 111. The electronic device 100 receives the accelerometer movement indication and determines (304) whether the accelerometer movement indication corresponds to a vehicle device mode or vehicle movement indication and if not, returns to scan for movement indications (302)." (i.e., detecting the user's movement; the examiner reads this as "before identifying the riding tool" because the next steps of Fig. 3 are to activate a plurality of sensors.)); continuing to detect, when it is detected that the electronic device is in a non-riding state, whether the electronic device is in the riding state (paragraph [0032], Fig.3, "After updating the current device mode, the electronic device 100 activates or deactivates one or more of the sensors 110 to scan for movement indications for a next device mode. In one example, the electronic device 100 toggles between device modes (e.g., between a vehicle device mode and a walking device mode) by alternating between configurations of activated or deactivated sensors 110." (i.e., Mohammed discloses either activating or deactivating sensors for the next mode. Thus, the UE continues to detect the user's movement to determine whether the user is in a riding state, as shown in Fig. 3.)). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference. Examiner further notes that Mohammed, Fig. 3, can be used either for detecting the user's riding state or to record speech, as disclosed in Mohammed, Fig. 3, par. 33.
However, LI in view of Mohammed in further view of HAN do not disclose triggering, when it is detected that the electronic device is in the riding state, the electronic device to identify the riding tool, to obtain the riding classification result. ZHAO discloses triggering, when it is detected that the electronic device is in the riding state (page 2, paragraph 7, "detecting the active state of the user switching to the driving state, starting the mobile terminal OS control function;" and page 7, paragraph 4, "step S204, using the classification model to identify out the type of current user activity state of the corresponding sensor data." and page 7, paragraph 6, "then inputting the extracted feature vector classification model so as to identify the type of user activity state, such as resting, walking, cycling, running, climbing, car, bus, subway, train, etc." (i.e., when the user is in the driving state, triggering the OS control function, which could be "to obtain the riding classification result")), the electronic device to identify the riding tool, to obtain the riding classification result (page 2, paragraph 10, "reading at least detection data of a sensor in the mobile terminal;" and page 20, paragraph 5, “each component of embodiments of the present invention may be implemented in hardware, or in a software module running on the one or more processor of implementation, or in a combination thereof.
It should be understood by those skilled in the art, in practice use a microprocessor or digital signal processor (DSP) to realize some or all of functions some or all components in the OS switching device under driving state of the invention” (i.e., obtaining sensor data such as acceleration or magnetometer data for the "riding classification result".)). LI in view of Mohammed in further view of HAN and ZHAO are considered to be analogous to the claimed invention because they are in the same field, namely procedures used during a speech recognition process, e.g., man-machine dialogue. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified LI's terminal device to implement ZHAO's model of detecting the user's riding state, because ZHAO's model can use additional data from other applications to improve the accuracy of detecting user states such as walking, driving, running, cycling, and other states, as mapped in the rejection of this claim, in order to provide the user with the most accurate category of the riding tool (page 10, paragraph 5, “In addition, when detecting the active state of the user according to the detection data of the mobile terminal, in order to improve the accuracy of detection, but also can obtain the mobile terminal of each application of historical data, such as historical data of each APP obtaining GPS of mobile terminal (Positioning System, global positioning system) information and/or mobile terminal is. Besides, it also can obtain the mobile terminal is transmitting data of each transmission tool, such as movement transmission data of upper terminal of Bluetooth and/or WIFI (WIreless-FIdelity).”). Regarding Claim 27, LI in view of Mohammed in further view of HAN discloses all the limitations of Claim 21.
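A minimal, hypothetical sketch in the spirit of ZHAO's user activity state classification model is shown below. The nearest-centroid approach, the feature choices, and every number are illustrative assumptions, not ZHAO's actual disclosure, which trains a machine-learning classifier on sensor feature vectors:

```python
# Illustrative sketch only: centroids, features, and activity labels are
# invented; ZHAO's classifier is a trained machine-learning model.
import math

CENTROIDS = {          # hypothetical mean (acc_std, mag_std) per activity
    "resting": (0.05, 0.1),
    "walking": (1.2, 0.3),
    "subway":  (0.4, 2.5),
}

def classify_activity(feature_vector):
    """Assign the activity whose centroid is nearest the feature vector."""
    return min(CENTROIDS, key=lambda k: math.dist(CENTROIDS[k], feature_vector))

def is_riding_state(activity):
    """Map the activity label onto the claimed riding / non-riding states."""
    return activity in {"subway", "car", "bus", "train"}
```

The second function illustrates the claimed "ride identifier": the model's activity label is collapsed into a binary riding / non-riding indication.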
However, LI in view of Mohammed in further view of HAN do not disclose wherein detecting whether the electronic device is in the riding state comprises: inputting the acceleration feature into an artificial intelligence riding state identification model to obtain a ride identifier outputted by the artificial intelligence riding state identification model, wherein the ride identifier indicates whether the electronic device is in the riding state or the non-riding state, and the artificial intelligence riding state identification model is obtained by training based on historical acceleration features of riding tools of different categories. ZHAO further discloses wherein detecting whether the electronic device is in the riding state comprises: inputting the acceleration feature into an artificial intelligence riding state identification model (page 7, paragraph 3, user activity state classification model) to obtain a ride identifier (page 7, paragraph 3, user activity) outputted by the artificial intelligence riding state identification model (page 7, paragraph 3, "step S202, extracting the feature vector of the current sensor data, and input type from the characteristic vector of the user activity state classification model;" (i.e., ZHAO discloses collecting sensor data and determining the current activity state using a classification model.)), wherein the ride identifier indicates whether the electronic device is in the riding state or the non-riding state (page 7, paragraph 4, "step S204, using the classification model to identify out the type of current user activity state of the corresponding sensor data." and page 7, paragraph 6, "then inputting the extracted feature vector classification model so as to identify the type of user activity state, such as resting, walking, cycling, running, climbing, car, bus, subway, train, etc."
(i.e., ZHAO discloses multiple classifications of user activity, including riding states such as in a car or subway, and non-riding states such as resting and walking.)), and the artificial intelligence riding state identification model is obtained by training based on historical acceleration features of riding tools of different categories (page 7, paragraph 1, "so the detection data input to the machine learning algorithm to train a classification model, then according to the machine learning algorithm to train the classification output result of model, analyzing and obtaining the activity state of the user." (i.e., ZHAO discloses training a classification model on the activity state of the user.)). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference. Claim(s) 22-25 are rejected under 35 U.S.C. 103 as being unpatentable over LI (CN 110516760 A) (IDS) in view of Mohammed (US-20150071090-A1) in view of HAN (US-20220199110-A1) in view of ZHAO (CN-107315519-A) in further view of Shao (US-20140244272-A1). Regarding Claim 22, LI in view of Mohammed in view of HAN in further view of ZHAO discloses all the limitations of Claim 21. However, LI in view of Mohammed in view of HAN in further view of ZHAO do not disclose controlling on and off of the microphone based on an operating status of the electronic device. Shao discloses controlling on and off of the microphone based on an operating status of the electronic device (paragraph [0114], "In step S101, the microphone may be in an operating state all the time, and thus, as long as there is the first airflow information, it will be detected by the microphone.
However, in normal cases, in order to save the power consumption and prevent from collecting invalid airflow information, the microphone may be maintained to be in an off state when the electronic device is in a screen lock state, and the microphone is then controlled to be in the operating state in response to a preset operation. For example, a vibration sensor is set on the microphone. When the vibration sensor detects the airflow information generated by the user using the electronic device, the microphone is controlled to be in an on state, to detect the first airflow information. Of course, in a specific implementation, the preset operation may be an operation of clicking a button, an operation of swiping a preset gesture etc., which is not limited by the embodiments of the present application." (i.e., controlling the microphone state based on the user's interaction with the user device.)). LI in view of Mohammed in view of HAN in further view of ZHAO and Shao are considered to be analogous to the claimed invention because they are in the same field, namely procedures used during a speech recognition process, e.g., man-machine dialogue. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified LI to implement the method of Shao to control the microphone during a screen-off state in order to conserve power (Shao, paragraph [0114], “However, in normal cases, in order to save the power consumption and prevent from collecting invalid airflow information, the microphone may be maintained to be in an off state when the electronic device is in a screen lock state,”). Regarding Claim 23, LI in view of Mohammed in view of HAN in view of ZHAO in further view of Shao discloses all the limitations of Claim 22.
Mohammed further discloses turning off the microphone when it is detected that the electronic device is in the non-riding state (paragraph [0026], "The electronic device 100 is configured to use one or more of the plurality of sensors 110 in order to determine a device mode of the electronic device 100. The electronic device 100 may be configured to activate or deactivate the sensors 110 separately, in sets, or simultaneously." and paragraph [0031], "The electronic device 100 compares (214) the overall movement indication with a device mode threshold to determine whether the current device mode should be updated. The device mode threshold may be a fixed percentage or numerical value (e.g., 75% or a numerical value of 80) or other indicator for comparison. Alternatively, the device mode threshold may be dynamic, based on a number of sensors used, a current device mode, or other factors. The device mode thresholds may be the same or different for each device mode. If the overall movement indication does not meet the device mode threshold (NO at 214), the electronic device 100 returns to scan for movement indications (202)." and paragraph [0032], "After updating the current device mode, the electronic device 100 activates or deactivates one or more of the sensors 110 to scan for movement indications for a next device mode." (i.e., Examiner points to Fig. 2:206, wherein the microphone is turned on, and Fig. 2:214, wherein the microphone is turned off. Also, sensors can be activated or deactivated after Fig. 2:216.)). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference.
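Purely as an illustration of the mode-driven sensor gating mapped to Mohammed above, the toggle can be sketched as follows. The class and method names are invented, and the 0.75 default merely echoes Mohammed's example of a fixed-percentage device mode threshold:

```python
# Hypothetical sketch of device-mode-driven microphone gating; names and
# the threshold default are illustrative assumptions, not Mohammed's API.
class SensorController:
    def __init__(self):
        self.microphone_on = False

    def update_mode(self, overall_movement, threshold=0.75):
        """Compare the overall movement indication with the device mode
        threshold; enable the microphone in a vehicle (riding) mode and
        disable it otherwise, mirroring activate/deactivate per mode."""
        riding = overall_movement >= threshold
        self.microphone_on = riding
        return "vehicle" if riding else "non-riding"
```

In this sketch, falling below the threshold both switches the reported mode to non-riding and turns the microphone off, matching the claimed behavior of deactivating the microphone in the non-riding state.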
Regarding Claim 24, the case scenario directed to controlling the on and off of the microphone based on the ride code push situation of the electronic device was given no patentable weight in the claims upon which it depends; hence, the limitations further defining the optional case scenario are given no patentable weight. Regarding Claim 25, LI in view of Mohammed in view of HAN in view of ZHAO in further view of Shao discloses all the limitations of Claim 22. Shao further discloses turning off the microphone when the electronic device is in a screen-off state (paragraph [0114], "In step S101, the microphone may be in an operating state all the time, and thus, as long as there is the first airflow information, it will be detected by the microphone. However, in normal cases, in order to save the power consumption and prevent from collecting invalid airflow information, the microphone may be maintained to be in an off state when the electronic device is in a screen lock state," (i.e., the examiner reads the "or" limitation as optional; the other limitation was not given patentable weight, and Shao discloses turning off the microphone when the device is in a screen-lock state, i.e., a "screen-off state.")). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference. Claim(s) 26 is rejected under 35 U.S.C. 103 as being unpatentable over LI (CN 110516760 A) (IDS) in view of Mohammed (US-20150071090-A1) in view of HAN (US-20220199110-A1) in view of ZHAO (CN-107315519-A) in further view of LIU (CN 109547624 A) (IDS; see translation submitted 8/20/2025). Regarding Claim 26, LI in view of Mohammed in view of HAN in further view of ZHAO discloses all the limitations of Claim 21.
However, LI in view of Mohammed in further view of HAN does not disclose wherein detecting whether the electronic device is in the riding state comprises: obtaining a base station signal acquired by a modem processor in the electronic device within a preset time period; detecting, based on the base station signal, a quantity of cells passed by the electronic device within the preset time period; and determining, based on the quantity of cells passed by the electronic device within the preset time period, whether the electronic device is in the riding state.

LIU discloses wherein detecting whether the electronic device is in the riding state comprises: obtaining a base station signal acquired by a modem processor in the electronic device within a preset time period (page 16, paragraph 9, "the scanning connection information in the preset time of the base station by the intelligent mobile terminal," (i.e., the mobile terminal's scanning of the base station maps to "obtaining a base station signal acquired by a modem processor," as in obtaining a signal within a preset time.)); detecting, based on the base station signal, a quantity of cells passed by the electronic device within the preset time period (page 16, paragraph 10, "A61: counting the intelligent mobile terminal in the preset time of the base station scanning connection information in different base station scanning connection number of the information." (i.e., counting, via the connection information, the number of different base stations the mobile terminal passes within the time period, as in counting the number of base stations to which the mobile terminal connected.)); and determining, based on the quantity of cells passed by the electronic device within the preset time period, whether the electronic device is in the riding state (page 16, paragraph 11, "A62: judging whether the intelligent mobile terminal in the preset time of the base station scanning connection information in different base station scanning connection number information is greater than a preset number." and page 16, paragraph 13, "A63: reduced static initial state of said intelligent mobile terminal walking state initial confidence score and confidence score." (i.e., using the number of connections with different base stations to determine whether the mobile terminal is in the walking state; if the number of different base stations is above the threshold, the result is that the user is in the riding state.)).

LI in view of Mohammed in view of HAN in further view of ZHAO and LIU are analogous to the claimed invention because they are in the same field: devices for establishing wireless links to base stations without route selection. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified LI to implement the method of counting the number of base stations the user connected to and, if it is above a preset number, updating the confidence level, because doing so helps LI's model increase its confidence that the user is moving in a vehicle rather than walking, and LIU discloses that this improves motion state detection (LIU, page 8, paragraph 1, "improves the universality of the motion state detection method.").

Allowable Subject Matter

Claims 39 and 40 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Erkin S. Abdullaev, whose telephone number is (571) 272-4135. The examiner can normally be reached Monday through Friday, 8:00 am to 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wesley Kim, can be reached at (571) 272-7867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERKIN ABDULLAEV/
Examiner, Art Unit 2648

/WESLEY L KIM/
Supervisory Patent Examiner, Art Unit 2648
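The cell-counting heuristic that the office action above cites LIU for (count the distinct base-station cells observed within a preset time window; if the count exceeds a preset number, infer a riding state rather than walking or stationary) can be sketched as follows. This is a minimal illustration; the function name, the threshold value, and the input format are assumptions, not taken from LIU.

```python
# Illustrative sketch of LIU's riding-state heuristic as characterized in the
# rejection: a device passing many distinct cells in a short window is likely
# in a vehicle. PRESET_NUMBER and all names here are assumed values.

PRESET_NUMBER = 3  # distinct-cells-per-window threshold (illustrative)


def is_riding_state(cell_ids_in_window: list[str],
                    preset_number: int = PRESET_NUMBER) -> bool:
    """Return True if the device appears to be riding.

    cell_ids_in_window: identifiers of the cells the modem reported
    (scanned/connected base stations) within the preset time period.
    """
    distinct_cells = len(set(cell_ids_in_window))  # A61: count distinct cells
    return distinct_cells > preset_number          # A62: compare to preset number
```

Under LIU's fuller scheme this comparison would not decide the state outright but would lower the confidence scores of the static and walking states (step A63); the boolean here collapses that scoring into a single threshold test for illustration.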

Prosecution Timeline

May 16, 2023
Application Filed
Aug 13, 2025
Non-Final Rejection — §103
Nov 19, 2025
Response Filed
Dec 15, 2025
Final Rejection — §103
Feb 13, 2026
Response after Final Action
Feb 20, 2026
Request for Continued Examination
Feb 27, 2026
Response after Non-Final Action
Mar 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578413
METHOD FOR POSITIONING USING WIRELESS COMMUNICATION AND ELECTRONIC DEVICE FOR SUPPORTING SAME
2y 5m to grant Granted Mar 17, 2026
Patent 12538116
CELLULAR SERVICE ACTIVATION AND DEACTIVATION ON MOBILE DEVICES
2y 5m to grant Granted Jan 27, 2026
Patent 12498448
ANTI-HOPPING ALGORITHM FOR INDOOR LOCALIZATION SYSTEMS
2y 5m to grant Granted Dec 16, 2025
Patent 12484007
METHOD AND APPARATUS FOR PROCESSING EVENT FOR DEVICE CHANGE
2y 5m to grant Granted Nov 25, 2025
Patent 12445554
METHOD AND DEVICE FOR MANAGING MULTIPLE WIRELESS CONNECTIONS SHARING A LIMITED TRUNK GROUP
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
88%
Grant Probability
99%
With Interview (+14.3%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
