beyond_笑谈

  • 2025-01-13
  • Replied to thread: Error when installing e2 studio

    e2 studio really is a hassle to install; back then I also went through several rounds of installing and uninstalling. You can refer to my notes from that time: [Follow me Season 2, Part 3] Forum members whose e2 studio development environment install isn't going smoothly, look here~~~ - DigiKey Technical Zone - EEWorld Forum

  • 2025-01-08
  • Replied to thread: Any peers running contract manufacturing plants in Malaysia or Mexico? How are your defect rates and cost control?

    qwqwqw2088 posted on 2025-1-8 16:59: "The quality of overseas production still isn't as good as Suzhou's." So which aspects mainly, process issues or soldering materials ... Mainly a people problem: "teach the apprentice everything and the master starves" (people hold back when training others), and Chinese production-line workers are more disciplined.

  • Replied to thread: Any peers running contract manufacturing plants in Malaysia or Mexico? How are your defect rates and cost control?

    胖子峰 posted on 2025-1-8 14:43: It depends on the product! Most of what I do is industrial and automotive; NPI is basically done in China, then transferred to the factories in Mexico and Malaysia, training local ... What I was making at the time was a commercial product, but professional grade (industrial-grade design at a professional-grade price), produced in very large volume.

  • Replied to thread: PCB design: layer selection

    Just select the object and change it to the layer you need; when exporting Gerbers, the output is organized per layer anyway.

  • Replied to thread: Fixing abnormal display in serial terminal software on Linux systems

    When editing a script or file on the board's system through SecureCRT, the on-screen display often gets out of sync with what is actually in the edit buffer. How do you deal with that?

  • 2025-01-07
  • Posted a thread: 《Linux内核深度解析》 (Linux Kernel Deep Analysis): Process Scheduling and Multithread Management

    In the Linux kernel, three system calls can be used to create a new process:

    fork: the child process is a copy of the parent, implemented with the copy-on-write technique.

    vfork: creates a child process that will immediately call execve to load a new program. To avoid copying physical pages, the parent sleeps until the child has loaded the new program. Now that fork uses copy-on-write, vfork has lost its speed advantage and has been deprecated.

    clone: gives precise control over which resources the child shares with the parent. Its main use is to let the pthread library create threads. Of the three, clone is the most fully featured, with many parameters and complex usage; fork is essentially a simplified clone.

    All three paths end up in the function _do_fork:

        long _do_fork(unsigned long clone_flags,    /* clone flags */
                      unsigned long stack_start,    /* start address of the new thread's user stack */
                      unsigned long stack_size,     /* length of the new thread's user stack */
                      int __user *parent_tidptr,    /* where, in the parent, the new thread's PID is stored */
                      int __user *child_tidptr,     /* where, in the child, the new thread's PID is stored */
                      unsigned long tls)            /* location of the new thread's thread-local storage */

    5. The function copy_process

    The main work of creating a new process is done by the function copy_process:

        static __latent_entropy struct task_struct *copy_process(
                        unsigned long clone_flags,
                        unsigned long stack_start,
                        unsigned long stack_size,
                        int __user *child_tidptr,
                        struct pid *pid,
                        int trace,
                        unsigned long tls,
                        int node)
        {
            int retval;
            struct task_struct *p;

            if ((clone_flags & (CLONE_NEWNS|CLONE_FS)) == (CLONE_NEWNS|CLONE_FS))
                return ERR_PTR(-EINVAL);

            if ((clone_flags & (CLONE_NEWUSER|CLONE_FS)) == (CLONE_NEWUSER|CLONE_FS))
                return ERR_PTR(-EINVAL);

            /*
             * Thread groups must share signals as well, and detached threads
             * can only be started up within the thread group.
             */
            if ((clone_flags & CLONE_THREAD) && !(clone_flags & CLONE_SIGHAND))
                return ERR_PTR(-EINVAL);

            /*
             * Shared signal handlers imply shared VM. By way of the above,
             * thread groups also imply shared VM. Blocking this case allows
             * for various simplifications in other code.
             */
            if ((clone_flags & CLONE_SIGHAND) && !(clone_flags & CLONE_VM))
                return ERR_PTR(-EINVAL);

            /*
             * Siblings of global init remain as zombies on exit since they are
             * not reaped by their parent (swapper). To solve this and to avoid
             * multi-rooted process trees, prevent global and container-inits
             * from creating siblings.
             */
            if ((clone_flags & CLONE_PARENT) &&
                        current->signal->flags & SIGNAL_UNKILLABLE)
                return ERR_PTR(-EINVAL);

            /*
             * If the new process will be in a different pid or user namespace
             * do not allow it to share a thread group with the forking task.
             */
            if (clone_flags & CLONE_THREAD) {
                if ((clone_flags & (CLONE_NEWUSER | CLONE_NEWPID)) ||
                    (task_active_pid_ns(current) !=
                        current->nsproxy->pid_ns_for_children))
                    return ERR_PTR(-EINVAL);
            }

            retval = security_task_create(clone_flags);
            if (retval)
                goto fork_out;

            retval = -ENOMEM;
            p = dup_task_struct(current, node);
            if (!p)
                goto fork_out;

            /*
             * This _must_ happen before we call free_task(), i.e. before we jump
             * to any of the bad_fork_* labels. This is to avoid freeing
             * p->set_child_tid which is (ab)used as a kthread's data pointer for
             * kernel threads (PF_KTHREAD).
             */
            p->set_child_tid = (clone_flags & CLONE_CHILD_SETTID) ? child_tidptr : NULL;
            /*
             * Clear TID on mm_release()?
             */
            p->clear_child_tid = (clone_flags & CLONE_CHILD_CLEARTID) ? child_tidptr : NULL;

            ftrace_graph_init_task(p);

            rt_mutex_init_task(p);

        #ifdef CONFIG_PROVE_LOCKING
            DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled);
            DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
        #endif
            retval = -EAGAIN;
            if (atomic_read(&p->real_cred->user->processes) >=
                    task_rlimit(p, RLIMIT_NPROC)) {
                if (p->real_cred->user != INIT_USER &&
                    !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN))
                    goto bad_fork_free;
            }
            current->flags &= ~PF_NPROC_EXCEEDED;

            retval = copy_creds(p, clone_flags);
            if (retval < 0)
                goto bad_fork_free;

            /*
             * If multiple threads are within copy_process(), then this check
             * triggers too late. This doesn't hurt, the check is only there
             * to stop root fork bombs.
             */
            retval = -EAGAIN;
            if (nr_threads >= max_threads)
                goto bad_fork_cleanup_count;

            delayacct_tsk_init(p);  /* Must remain after dup_task_struct() */
            p->flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER | PF_IDLE);
            p->flags |= PF_FORKNOEXEC;
            INIT_LIST_HEAD(&p->children);
            INIT_LIST_HEAD(&p->sibling);
            rcu_copy_process(p);
            p->vfork_done = NULL;
            spin_lock_init(&p->alloc_lock);

            init_sigpending(&p->pending);

            p->utime = p->stime = p->gtime = 0;
        #ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
            p->utimescaled = p->stimescaled = 0;
        #endif
            prev_cputime_init(&p->prev_cputime);

        #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN
            seqcount_init(&p->vtime_seqcount);
            p->vtime_snap = 0;
            p->vtime_snap_whence = VTIME_INACTIVE;
        #endif

        #if defined(SPLIT_RSS_COUNTING)
            memset(&p->rss_stat, 0, sizeof(p->rss_stat));
        #endif

            p->default_timer_slack_ns = current->timer_slack_ns;

            task_io_accounting_init(&p->ioac);
            acct_clear_integrals(p);

            posix_cpu_timers_init(p);

            p->start_time = ktime_get_ns();
            p->real_start_time = ktime_get_boot_ns();
            p->io_context = NULL;
            p->audit_context = NULL;
            cgroup_fork(p);
        #ifdef CONFIG_NUMA
            p->mempolicy = mpol_dup(p->mempolicy);
            if (IS_ERR(p->mempolicy)) {
                retval = PTR_ERR(p->mempolicy);
                p->mempolicy = NULL;
                goto bad_fork_cleanup_threadgroup_lock;
            }
        #endif
        #ifdef CONFIG_CPUSETS
            p->cpuset_mem_spread_rotor = NUMA_NO_NODE;
            p->cpuset_slab_spread_rotor = NUMA_NO_NODE;
            seqcount_init(&p->mems_allowed_seq);
        #endif
        #ifdef CONFIG_TRACE_IRQFLAGS
            p->irq_events = 0;
            p->hardirqs_enabled = 0;
            p->hardirq_enable_ip = 0;
            p->hardirq_enable_event = 0;
            p->hardirq_disable_ip = _THIS_IP_;
            p->hardirq_disable_event = 0;
            p->softirqs_enabled = 1;
            p->softirq_enable_ip = _THIS_IP_;
            p->softirq_enable_event = 0;
            p->softirq_disable_ip = 0;
            p->softirq_disable_event = 0;
            p->hardirq_context = 0;
            p->softirq_context = 0;
        #endif

            p->pagefault_disabled = 0;

        #ifdef CONFIG_LOCKDEP
            p->lockdep_depth = 0; /* no locks held yet */
            p->curr_chain_key = 0;
            p->lockdep_recursion = 0;
        #endif

        #ifdef CONFIG_DEBUG_MUTEXES
            p->blocked_on = NULL; /* not blocked yet */
        #endif
        #ifdef CONFIG_BCACHE
            p->sequential_io = 0;
            p->sequential_io_avg = 0;
        #endif

            /* Perform scheduler related setup. Assign this task to a CPU. */
            retval = sched_fork(clone_flags, p);
            if (retval)
                goto bad_fork_cleanup_policy;

            retval = perf_event_init_task(p);
            if (retval)
                goto bad_fork_cleanup_policy;
            retval = audit_alloc(p);
            if (retval)
                goto bad_fork_cleanup_perf;
            /* copy all the process information */
            shm_init_task(p);
            retval = security_task_alloc(p, clone_flags);
            if (retval)
                goto bad_fork_cleanup_audit;
            retval = copy_semundo(clone_flags, p);
            if (retval)
                goto bad_fork_cleanup_security;
            retval = copy_files(clone_flags, p);
            if (retval)
                goto bad_fork_cleanup_semundo;
            retval = copy_fs(clone_flags, p);
            if (retval)
                goto bad_fork_cleanup_files;
            retval = copy_sighand(clone_flags, p);
            if (retval)
                goto bad_fork_cleanup_fs;
            retval = copy_signal(clone_flags, p);
            if (retval)
                goto bad_fork_cleanup_sighand;
            retval = copy_mm(clone_flags, p);
            if (retval)
                goto bad_fork_cleanup_signal;
            retval = copy_namespaces(clone_flags, p);
            if (retval)
                goto bad_fork_cleanup_mm;
            retval = copy_io(clone_flags, p);
            if (retval)
                goto bad_fork_cleanup_namespaces;
            retval = copy_thread_tls(clone_flags, stack_start, stack_size, p, tls);
            if (retval)
                goto bad_fork_cleanup_io;

            if (pid != &init_struct_pid) {
                pid = alloc_pid(p->nsproxy->pid_ns_for_children);
                if (IS_ERR(pid)) {
                    retval = PTR_ERR(pid);
                    goto bad_fork_cleanup_thread;
                }
            }

        #ifdef CONFIG_BLOCK
            p->plug = NULL;
        #endif
        #ifdef CONFIG_FUTEX
            p->robust_list = NULL;
        #ifdef CONFIG_COMPAT
            p->compat_robust_list = NULL;
        #endif
            INIT_LIST_HEAD(&p->pi_state_list);
            p->pi_state_cache = NULL;
        #endif
            /*
             * sigaltstack should be cleared when sharing the same VM
             */
            if ((clone_flags & (CLONE_VM|CLONE_VFORK)) == CLONE_VM)
                sas_ss_reset(p);

            /*
             * Syscall tracing and stepping should be turned off in the
             * child regardless of CLONE_PTRACE.
             */
            user_disable_single_step(p);
            clear_tsk_thread_flag(p, TIF_SYSCALL_TRACE);
        #ifdef TIF_SYSCALL_EMU
            clear_tsk_thread_flag(p, TIF_SYSCALL_EMU);
        #endif
            clear_all_latency_tracing(p);

            /* ok, now we should be set up.. */
            p->pid = pid_nr(pid);
            if (clone_flags & CLONE_THREAD) {
                p->exit_signal = -1;
                p->group_leader = current->group_leader;
                p->tgid = current->tgid;
            } else {
                if (clone_flags & CLONE_PARENT)
                    p->exit_signal = current->group_leader->exit_signal;
                else
                    p->exit_signal = (clone_flags & CSIGNAL);
                p->group_leader = p;
                p->tgid = p->pid;
            }

            p->nr_dirtied = 0;
            p->nr_dirtied_pause = 128 >> (PAGE_SHIFT - 10);
            p->dirty_paused_when = 0;

            p->pdeath_signal = 0;
            INIT_LIST_HEAD(&p->thread_group);
            p->task_works = NULL;

            cgroup_threadgroup_change_begin(current);
            /*
             * Ensure that the cgroup subsystem policies allow the new process to be
             * forked. It should be noted the the new process's css_set can be changed
             * between here and cgroup_post_fork() if an organisation operation is in
             * progress.
             */
            retval = cgroup_can_fork(p);
            if (retval)
                goto bad_fork_free_pid;

            /*
             * Make it visible to the rest of the system, but dont wake it up yet.
             * Need tasklist lock for parent etc handling!
             */
            write_lock_irq(&tasklist_lock);

            /* CLONE_PARENT re-uses the old parent */
            if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) {
                p->real_parent = current->real_parent;
                p->parent_exec_id = current->parent_exec_id;
            } else {
                p->real_parent = current;
                p->parent_exec_id = current->self_exec_id;
            }

            klp_copy_process(p);

            spin_lock(&current->sighand->siglock);

            /*
             * Copy seccomp details explicitly here, in case they were changed
             * before holding sighand lock.
             */
            copy_seccomp(p);

            /*
             * Process group and session signals need to be delivered to just the
             * parent before the fork or both the parent and the child after the
             * fork. Restart if a signal comes in before we add the new process to
             * it's process group.
             * A fatal signal pending means that current will exit, so the new
             * thread can't slip out of an OOM kill (or normal SIGKILL).
             */
            recalc_sigpending();
            if (signal_pending(current)) {
                retval = -ERESTARTNOINTR;
                goto bad_fork_cancel_cgroup;
            }
            if (unlikely(!(ns_of_pid(pid)->nr_hashed & PIDNS_HASH_ADDING))) {
                retval = -ENOMEM;
                goto bad_fork_cancel_cgroup;
            }

            if (likely(p->pid)) {
                ptrace_init_task(p, (clone_flags & CLONE_PTRACE) || trace);

                init_task_pid(p, PIDTYPE_PID, pid);
                if (thread_group_leader(p)) {
                    init_task_pid(p, PIDTYPE_PGID, task_pgrp(current));
                    init_task_pid(p, PIDTYPE_SID, task_session(current));

                    if (is_child_reaper(pid)) {
                        ns_of_pid(pid)->child_reaper = p;
                        p->signal->flags |= SIGNAL_UNKILLABLE;
                    }

                    p->signal->leader_pid = pid;
                    p->signal->tty = tty_kref_get(current->signal->tty);
                    /*
                     * Inherit has_child_subreaper flag under the same
                     * tasklist_lock with adding child to the process tree
                     * for propagate_has_child_subreaper optimization.
                     */
                    p->signal->has_child_subreaper = p->real_parent->signal->has_child_subreaper ||
                                                     p->real_parent->signal->is_child_subreaper;
                    list_add_tail(&p->sibling, &p->real_parent->children);
                    list_add_tail_rcu(&p->tasks, &init_task.tasks);
                    attach_pid(p, PIDTYPE_PGID);
                    attach_pid(p, PIDTYPE_SID);
                    __this_cpu_inc(process_counts);
                } else {
                    current->signal->nr_threads++;
                    atomic_inc(&current->signal->live);
                    atomic_inc(&current->signal->sigcnt);
                    list_add_tail_rcu(&p->thread_group,
                                      &p->group_leader->thread_group);
                    list_add_tail_rcu(&p->thread_node,
                                      &p->signal->thread_head);
                }
                attach_pid(p, PIDTYPE_PID);
                nr_threads++;
            }

            total_forks++;
            spin_unlock(&current->sighand->siglock);
            syscall_tracepoint_update(p);
            write_unlock_irq(&tasklist_lock);

            proc_fork_connector(p);
            cgroup_post_fork(p);
            cgroup_threadgroup_change_end(current);
            perf_event_fork(p);

            trace_task_newtask(p, clone_flags);
            uprobe_copy_process(p, clone_flags);

            return p;

        bad_fork_cancel_cgroup:
            spin_unlock(&current->sighand->siglock);
            write_unlock_irq(&tasklist_lock);
            cgroup_cancel_fork(p);
        bad_fork_free_pid:
            cgroup_threadgroup_change_end(current);
            if (pid != &init_struct_pid)
                free_pid(pid);
        bad_fork_cleanup_thread:
            exit_thread(p);
        bad_fork_cleanup_io:
            if (p->io_context)
                exit_io_context(p);
        bad_fork_cleanup_namespaces:
            exit_task_namespaces(p);
        bad_fork_cleanup_mm:
            if (p->mm)
                mmput(p->mm);
        bad_fork_cleanup_signal:
            if (!(clone_flags & CLONE_THREAD))
                free_signal_struct(p->signal);
        bad_fork_cleanup_sighand:
            __cleanup_sighand(p->sighand);
        bad_fork_cleanup_fs:
            exit_fs(p); /* blocking */
        bad_fork_cleanup_files:
            exit_files(p); /* blocking */
        bad_fork_cleanup_semundo:
            exit_sem(p);
        bad_fork_cleanup_security:
            security_task_free(p);
        bad_fork_cleanup_audit:
            audit_free(p);
        bad_fork_cleanup_perf:
            perf_event_free_task(p);
        bad_fork_cleanup_policy:
        #ifdef CONFIG_NUMA
            mpol_put(p->mempolicy);
        bad_fork_cleanup_threadgroup_lock:
        #endif
            delayacct_tsk_free(p);
        bad_fork_cleanup_count:
            atomic_dec(&p->cred->user->processes);
            exit_creds(p);
        bad_fork_free:
            p->state = TASK_DEAD;
            put_task_stack(p);
            free_task(p);
        fork_out:
            return ERR_PTR(retval);
        } //end copy_process

    6. Waking the new process

    The function wake_up_new_task is responsible for waking the newly created process:

        void wake_up_new_task(struct task_struct *p)
        {
            struct rq_flags rf;
            struct rq *rq;

            raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
            p->state = TASK_RUNNING;
        #ifdef CONFIG_SMP
            /*
             * Fork balancing, do it here and not earlier because:
             *  - cpus_allowed can change in the fork path
             *  - any previously selected CPU might disappear through hotplug
             *
             * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
             * as we're not fully set-up yet.
             */
            __set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
        #endif
            rq = __task_rq_lock(p, &rf);
            update_rq_clock(rq);
            post_init_entity_util_avg(&p->se);

            activate_task(rq, p, ENQUEUE_NOCLOCK);
            p->on_rq = TASK_ON_RQ_QUEUED;
            trace_sched_wakeup_new(p);
            check_preempt_curr(rq, p, WF_FORK);
        #ifdef CONFIG_SMP
            if (p->sched_class->task_woken) {
                /*
                 * Nothing relies on rq->lock after this, so its fine to
                 * drop it.
                 */
                rq_unpin_lock(rq, &rf);
                p->sched_class->task_woken(rq, p);
                rq_repin_lock(rq, &rf);
            }
        #endif
            task_rq_unlock(rq, p, &rf);
        } // end of wake_up_new_task

    7. The new process runs for the first time

    A new process starts executing from the function ret_from_fork, which is defined separately by each processor architecture. The ARM64 definition is:

        tsk .req x28    // current thread_info

        ENTRY(ret_from_fork)
            bl  schedule_tail
            cbz x19, 1f     // not a kernel thread
            mov x0, x20
            blr x19
        1:  get_thread_info tsk
            b   ret_to_user
        ENDPROC(ret_from_fork)
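
    The kernel-side walk-through above has a simple user-space counterpart. The following is a minimal sketch (my illustration, not part of the original notes) of the fork path discussed: the child starts as a copy-on-write duplicate of the parent, a write in the child gives it a private copy of the page, and the child then execs a new program, which is exactly the pattern vfork was originally invented to speed up.

        /* fork_demo.c - minimal sketch of fork + copy-on-write + exec */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            int counter = 100;      /* lives in a page shared copy-on-write after fork */
            pid_t pid = fork();     /* enters the kernel and ends up in _do_fork() */

            if (pid < 0) {
                perror("fork");
                exit(EXIT_FAILURE);
            }
            if (pid == 0) {
                counter += 1;       /* first write: the child gets its own copy of the page */
                printf("child  pid=%d counter=%d\n", getpid(), counter);
                execlp("true", "true", (char *)NULL);   /* classic fork + exec pattern */
                _exit(127);         /* reached only if exec fails */
            }
            waitpid(pid, NULL, 0);  /* reap the child */
            printf("parent pid=%d counter=%d\n", getpid(), counter);
            return 0;
        }

    Built with gcc fork_demo.c and run, the child prints counter=101 while the parent still prints counter=100: after fork, each side's writes land in its own private copy of the page.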

  • Replied to thread: Any peers running contract manufacturing plants in Malaysia or Mexico? How are your defect rates and cost control?

    Let me share a real case from a company I used to work for (US-owned, in the top tier of its industry). After NPI matured in Suzhou, new products went to volume production in the US and Mexico, but the quality of the overseas production still never matched Suzhou's.

  • Replied to thread: What is the difference between linear regulators and switching regulators?

    木犯001号 posted on 2025-1-7 17:54: You can't really tell just from the PCB; you have to measure with instruments ... The topologies of linear regulators and switching supplies are clearly distinguishable; failing that, look up the chip part number (DC-DC). An AC-DC is obvious from its transformer.

  • Replied to thread: Vote: image processing, a communications "novel", Cangjie programming, or an Altium book, which should come first? (sponsored by Posts & Telecom Press)

    《图像处理与计算机视觉实践——基于OpenCV和Python》 (Image Processing and Computer Vision in Practice: Based on OpenCV and Python)

  • 2025-01-02
  • Replied to thread: Switching power supplies

    Switching power supplies involve quite a lot of material, and analog and digital electronics are certainly the foundation. You need a clear grasp of each discrete component's characteristic parameters and of how the circuit topologies work, and then start by analyzing existing, proven circuits.

  • Replied to thread: How do you measure the internal temperature rise of a block capacitor?

    Drilling a hole in an electrolytic capacitor? I've honestly never run into that before. Doesn't doing that risk leaking the electrolyte?

  • Replied to thread: How to get started with switching power supply hardware design

    First get a solid grasp of analog and digital electronics, especially the characteristic parameters of discrete components, then start by studying existing circuits.

  • 2024-12-31
  • Replied to thread: What share of domestic (Chinese-made) parts is in the products you make?

    freebsder posted on 2024-12-31 11:36: Are there many pitfalls? I worked on domestic substitution for a while in 2020, and the parts we bought came with a pile of pitfalls. ... Plenty of pitfalls: at design time everything seems fine, then at prototyping the chips are out of stock, and during testing the performance is off.

  • 2024-12-26
  • Replied to thread: What share of domestic (Chinese-made) parts is in the products you make?

    Most industrial and commercial controllers can be built with domestic parts now, but you need to do extra pre-research before starting the design.

  • 2024-12-25
  • Replied to thread: Accurately capturing the oscilloscope trigger level and rise time

    beyond_笑谈 posted on 2024-12-25 18:51: I used to use a Yokogawa scope; it can trigger on multiple signals, you set the trigger voltage and trigger point with the timebase in µs, and after triggering you can save both screenshots and raw waveforms ... This reminds me of a Keysight online live session, which mentioned that Keysight scopes can do this as well.

  • Replied to thread: Accurately capturing the oscilloscope trigger level and rise time

    I used to use a Yokogawa oscilloscope: it supports multi-signal triggering, lets you set the trigger voltage and trigger point with the timebase in µs, and after triggering you can save both screenshots and the raw waveforms.

  • Replied to thread: [EEWorld Invites You to Tear Down] Issue 16: Car window controller teardown

    Judging from the photos and the components, the cost of this controller is quite low. Which car brand is it used in?

  • Replied to thread: Altium Designer 25.1.2

    AD updates really fast. Still, the early AD09 and the later AD16 felt the smoothest to use.

  • 2024-12-23
  • Replied to thread: Good morning, experts: is there a good way to detect whether an IO pin is stuck at a high level?

    Tacking posted on 2024-12-23 14:24: Do you mean using another pin to poll the state of this input-capture pin? If the polled state stays unchanged for a long time, the fan has stopped. ... Right, exactly. And since the inverter has a bit more drive capability, it can also directly drive an LED to show the status.

  • Replied to thread: Good morning, experts: is there a good way to detect whether an IO pin is stuck at a high level?

    Tacking posted on 2024-12-23 11:36: Here's the situation: I'm using one input-capture channel, through a multiplexer, to read the FG square-wave signals of four DC fans and derive the fan speed ... Put an inverter on the MCU's input-capture channel and read that signal's state with another MCU pin; the inverter can drive an LED at the same time. A rough sketch of the polling idea is below.
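
    A minimal C sketch of that polling idea (my illustration, with hypothetical HAL names: read_gpio() and millis() stand in for whatever the real MCU SDK provides): if the level on the monitored pin does not toggle within a timeout, the fan is treated as stalled.

        #include <stdbool.h>
        #include <stdint.h>

        #define FG_TIMEOUT_MS 500U          /* no FG edge for this long -> stalled */

        extern bool read_gpio(int pin);     /* hypothetical: read the pin level */
        extern uint32_t millis(void);       /* hypothetical: milliseconds since boot */

        bool fan_is_stalled(int pin)
        {
            bool last = read_gpio(pin);
            uint32_t start = millis();

            /* Poll until an edge shows up or the timeout expires. */
            while ((millis() - start) < FG_TIMEOUT_MS) {
                if (read_gpio(pin) != last)
                    return false;           /* level toggled: FG pulses present */
            }
            return true;                    /* pin stuck high or low: fan stopped */
        }

    The timeout only needs to exceed one FG period at the lowest expected fan speed; the 500 ms here is an assumed placeholder, not a value from the thread.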

Statistics

149 visitors so far

  • Forum credits (芯积分): 2284
  • Friends: 1
  • Threads: 62
  • Replies: 1222

Messages



sagesun 2024-11-27
Hello, I'm building a little gadget that converts UART tx/rx to LoRa, but the indicator LEDs hung on the pulled-up tx/rx lines occasionally cause data errors. In [Experts, I want to add an indicator LED to the UART tx/rx, where should it go? https://bbs.eeworld.com.cn/thread-1276294-1-1.html] I saw your comment that adding a trigger (gate) would not affect signal quality. Which kind of trigger did you mean?